- Processor simulation in Simulink
- What's so great about this model?
- It's really a tour of Turing
- Wait, I can use a program to write a program that runs a program that I have written in my own language?
- What in the name of Thor's Hammer is HDL?
- This seems confusing...
- What's your take on processor simulation?
Ever wonder how a microprocessor runs a program to perform computations? Better yet, how would you go about building and implementing such a design?
Mikhail's example is simple, based on one published in a book, but it is a well-laid-out description of a computational process model in Simulink® that supports both simulating the program on your desktop machine and generating HDL code to deploy the design onto an FPGA.
To me this is just super cool. Perhaps that says more about me than this File Exchange entry. I'm sure there's some marketing message I should be spouting about how important simulation is to design. But really, what grabbed my attention is this: ultimately, it means I could create my own processor and programming language.
I very much like understanding how things work. I have thought a great deal about processors, but perhaps hadn't considered how they actually work in much detail. After all, my background is mechanical engineering, not computer science.
However... back in university, I took a philosophy course on artificial intelligence, and we spent a good deal of time developing and discussing Turing machines. (These have come back into the spotlight recently with the release of the movie about Alan Turing).
The basic premise behind a Turing machine is that an instruction can read a datum, perform some action with that datum, and write a resulting datum into some location that can store the information.
That is precisely what this model describes. I suppose that's good, because a microprocessor is a Turing machine (well, a finite version of one).
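The read/act/write cycle above can be sketched in a few lines of Python. This is a hypothetical toy machine of my own (not taken from the post or the model): its transition table implements binary increment, with the head starting at the rightmost digit of the tape.

```python
# A minimal Turing machine sketch (illustrative toy, not the MIPS model):
# each step reads a datum, acts on it, writes a result, and moves the head.

def run_turing_machine(tape, transitions, state="carry", blank="_"):
    """Run until the machine enters the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = max(tape)              # start at the rightmost symbol
    while state != "halt":
        symbol = tape.get(head, blank)          # read a datum
        state, write, move = transitions[(state, symbol)]
        tape[head] = write                      # write a resulting datum
        head += 1 if move == "R" else -1        # move to a new location
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# (state, read) -> (next_state, write, move): propagate a carry leftward.
increment = {
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry -> 0, carry onward
    ("carry", "0"): ("halt",  "1", "L"),  # 0 + carry -> 1, done
    ("carry", "_"): ("halt",  "1", "L"),  # past the left edge: new digit
}

print(run_turing_machine("1011", increment))  # 1011 (11) + 1 -> 1100 (12)
```

Everything a processor does is, at bottom, an elaborate version of this loop.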
Wait, I can use a program to write a program that runs a program that I have written in my own language?
It seems like there's some sort of circular dependency here, but the short answer is yes, you can do that. In fact, Alan Turing essentially proved this notion with his Universal Turing Machine: a single machine that can read the description of any other machine and simulate it.
You can develop an algorithm that represents fundamental operations. In this case the algorithm can be implemented on hardware by generating HDL from the Simulink model.
A hardware description language (HDL) is a textual language that specifies how a piece of electronics will operate by describing the structure of the device.
In terms of silicon devices, it describes data pathways through which bits will flow and be stored. Field programmable gate arrays (FPGAs) are silicon devices that can be "reconfigured" into a different set of data pathways (as opposed to your computer's processor, which has a fixed set of data pathways).
In fact, processor designers and developers use HDL to create and implement processor designs. HDL tends to read as a very low-level language because you are dealing at the level of individual bits.
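To give a feel for that bit-level flavor without writing actual HDL, here is a Python sketch (my own illustration, not from the post) of the kind of primitive an HDL description wires together: a one-bit full adder, chained into a ripple-carry adder the way hardware instances would be.

```python
# A full adder expressed as the gates an HDL would describe structurally
# (Python stand-in for illustration; real HDL would be Verilog or VHDL).

def full_adder(a, b, carry_in):
    """One bit of addition: XOR gates form the sum, AND/OR form the carry."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_carry_add(x, y, width=4):
    """Chain full adders bit by bit, like instantiating them in hardware."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry  # carry is the overflow out of the top bit

print(ripple_carry_add(0b0101, 0b0011))  # 5 + 3 -> (8, 0) in 4 bits
```

In an FPGA, "reconfiguring" means rerouting exactly these kinds of gate-to-gate pathways.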
It's clear that these concepts are quite abstract, which is why I found the MIPS model so interesting. I could begin to deconstruct how the processor will ultimately parse and respond to an instruction.
I used the fact that this algorithm is in Simulink to interrogate different aspects of the processor design, such as:
- What is the current processor instruction at any given time in the program?
Have you ever built your own processor? Would you be interested in simulating how a processor works? Do you believe this approach in Simulink is scalable?
Leave your comments here.