Consider a bucket containing five balls, two blue and three red. In this simple example we see that initially the probability of picking a blue ball is 40%. If we do normal randomisation (with replacement), the probability stays at 40% for every single pick. With randomisation without replacement, on the other hand, after picking a blue ball there is one blue ball less in the bucket, and thus for the next pick the probability of picking a blue ball has been reduced to 25%. The really nice thing about randomisation without replacement is that we know for sure that after 5 picks we have picked exactly 2 blue and 3 red balls, in random order. This means we know the distribution between red and blue balls, even for only 5 picks.

Let us correlate this to generating random numbers for a testbench, using the payload size of some protocol as an example. Assume the payload can be anything from 0 to 256 bytes, and that the corner cases of 0, 1, 2 and 256 bytes should be properly checked, whereas payload sizes in the range 3-255 are assumed to be less buggy, so we only need some of those sizes. In summary, we want a certain distribution of the different payload sizes and a minimum total number of packets to transmit. Thus, we decide to transmit exactly 3 packets with payload size 0, 2 packets with payload sizes between 1 and 2, 8 packets with payload sizes between 3 and 255, and 2 packets with a payload size of 256 bytes. To illustrate this using the bucket and balls analogy, we colour code the four payload-size categories pink, yellow, green and blue respectively, as shown below.

As system-on-chip (SoC) designs proceed on their march to greater complexity, test suites containing thousands of lines of code for system-level verification continue to be written by hand, a quaintly old-school and ineffective practice defying the adage "automate whenever possible." This is especially true for C tests that run on an SoC's embedded processors to verify the entire device prior to fabrication.

Automating verification test composition where possible has been shown to increase productivity in many phases of SoC development. Constrained-random techniques, for example in a Universal Verification Methodology (UVM) testbench, make use of randomized test vectors directed at specific scenarios to increase coverage. While these have increased verification efficiency at the hardware block level, the design is still perceived as a black box, with stimulus, checks and coverage code written separately, which remains an onerous and error-prone task for large blocks. It is hard to extend this methodology to the system level, given the need to combine processor test code with I/O transactions, often executed on an emulator or prototyping system.

To properly verify an SoC, the processors themselves must be exercised. UVM and other constrained-random approaches do not account for code running on the processors. In fact, to use UVM on an SoC, the processors are often removed and replaced by virtual inputs and outputs onto the SoC bus, allowing the sub-system minus the processors to be verified.

SoC verification engineers recognize the limitations of constrained-random testbenches, driving them to handwrite C tests to run on the processors in both simulation and hardware emulation, even though such tests are limited in how fully they can exercise the SoC design. The performance of these verification platforms is not good enough to run a full operating system (OS), so the tests execute "bare-metal," which adds significant overhead to the composition effort. It is unusual for handwritten tests, especially without the aid of OS services, to run in a coordinated way across multi-core processors leveraging multiple threads. The result is that aspects of SoC behavior, such as concurrent operations and coherency, are minimally verified.

Automatically generated C tests, of course, make more efficient use of engineering resources. Generated C test cases can exercise more of the SoC's functionality than handwritten tests and will seek out hard-to-imagine complex corner cases. Multi-threaded, multi-processor test cases can exercise all parallel paths within the design to verify concurrency. They can move data among memory segments to stress coherency algorithms, and coordinate with I/O transactions when data should be sent to the chip's inputs or read from its outputs. The overall effect is to increase system functional coverage, typically to greater than 90%, from numbers that are characteristically far lower.