Simulation – better than the real thing?
At Mentor Graphics I have to be careful what I say – I am treading on thin ice. The problem is that I am a software engineer. A very large proportion of my colleagues in the company have a hardware design background. Now I would not say that the two disciplines are at war, but there has always been a tension between hardware and software developers. In an embedded design, if something goes wrong, each party tends to assume that the other is at fault. Worse still, if a hardware design flaw is located late in the development process, it may be too late to fix it economically, so the only option is to accommodate the problem in software. And gosh, does that rankle.
So, I tend to regard hardware as a necessary evil that allows me to run my software. It is probably no surprise, therefore, to learn that a favourite technology of mine is simulation …
The first issue is what the term “simulation” actually means. If a customer asks me whether we have a simulation solution, I always quiz them to ascertain exactly what they are looking for. Broadly, a simulator for embedded software development is an environment that enables software to be run without the final hardware being available. This can be approached in a number of ways:
- Logic simulation – The hardware logic is simulated at the lowest level [gates]. Although this is ideal for developing the hardware design [using a tool like Mentor’s ModelSim], modeling a complete embedded system and executing code would be painfully slow.
- Hardware/Software Co-verification – Using an instruction set simulator [see below] linked to a logic simulator [see above], a reasonable compromise on performance can be achieved [using a tool like Mentor’s Seamless]. This makes sense because, typically, the CPU design is already fully proven, so a gate-accurate model of it is overkill.
- Instruction Set Simulation – An ISS reads the code, instruction by instruction, and simulates its execution on the chosen CPU. It is much slower than real time, but very precise, and it can give useful execution time estimates. Typically the CPU simulator is surrounded by functional models of peripheral devices. This can be an excellent approach for debugging low-level code like drivers [a minimal sketch of the central fetch-decode-execute loop appears after this list].
- Host [Native] Code Execution – Running the code on the host [desktop] computer delivers the maximum performance – often exceeding that of the real target hardware. For it to be effective, the environment must offer functional models of peripherals and relevant integration with the host computer’s peripherals [like networking, serial I/O, USB etc.]. For larger applications, this approach, using tools like Mentor’s SimTest [see this paper], enables considerable progress to be made before hardware is available and offers an economical solution for larger, distributed teams [the second sketch after this list shows the peripheral-model idea].
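To make the ISS idea concrete, here is a minimal sketch of the fetch-decode-execute loop at its heart. The toy CPU, its opcodes and its register file are all my own invention for illustration – a real ISS models a specific architecture and keeps far more detailed timing information:

```c
/* Minimal sketch of an instruction set simulator's core loop.
 * The instruction set and register file are hypothetical. */
#include <stdio.h>
#include <stdint.h>

enum { OP_HALT, OP_LOADI, OP_ADD, OP_PRINT };    /* toy opcodes */

typedef struct { uint8_t op, rd, rs, imm; } insn_t;

int main(void)
{
    /* Toy program: r0 = 2; r1 = 3; r0 = r0 + r1; print r0; halt */
    insn_t program[] = {
        { OP_LOADI, 0, 0, 2 },
        { OP_LOADI, 1, 0, 3 },
        { OP_ADD,   0, 1, 0 },
        { OP_PRINT, 0, 0, 0 },
        { OP_HALT,  0, 0, 0 },
    };
    uint32_t reg[4] = { 0 };
    unsigned pc = 0, cycles = 0;

    for (;;) {                      /* fetch-decode-execute loop */
        insn_t i = program[pc++];   /* fetch */
        cycles++;                   /* crude cycle accounting, the
                                       basis of execution time estimates */
        switch (i.op) {             /* decode and execute */
        case OP_LOADI: reg[i.rd] = i.imm;      break;
        case OP_ADD:   reg[i.rd] += reg[i.rs]; break;
        case OP_PRINT: printf("r%u = %u\n", i.rd, (unsigned)reg[i.rd]); break;
        case OP_HALT:  printf("halted after %u cycles\n", cycles);
                       return 0;
        }
    }
}
```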
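And here is a sketch of the trick that makes host execution work: the application code is compiled unchanged, while the lowest driver layer is swapped between the target’s register interface and a functional model that maps the peripheral onto the desktop’s own I/O. The register address, the build flag and the function names are hypothetical:

```c
/* Minimal sketch of host [native] execution with a functional
 * peripheral model. TARGET_BUILD, the register address and the
 * uart_ functions are invented for illustration. */
#include <stdio.h>
#include <stdint.h>

#ifdef TARGET_BUILD
/* Target build: write to the (hypothetical) UART data register. */
#define UART_TX (*(volatile uint8_t *)0x40001000u)
static void uart_putc(char c) { UART_TX = (uint8_t)c; }
#else
/* Host build: functional model - the "UART" is the host console. */
static void uart_putc(char c) { putchar(c); }
#endif

/* Everything below this line is identical in both builds. */
static void uart_puts(const char *s)
{
    while (*s)
        uart_putc(*s++);
}

int main(void)
{
    uart_puts("hello from the simulated board\r\n");
    return 0;
}
```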
These technologies do not compete with one another. Each offers a different balance of precision and speed, which may be appropriate at different points in the design cycle. This can be visualized if I plot them on a graph.
You can change the rules and deviate from this nice relationship by “cheating” – i.e. using hardware acceleration, which is special electronics designed to “turbo-charge” the logic simulation [like Mentor’s Veloce range]. But that is no longer simulation – that is emulation, which is another story.