Saturday, May 8, 2010

A week in M5

I spent the last week or so playing around with a relatively new simulator called M5.  It is a very nice simulator, especially for architectural research.  My interest is in using it for both operating system and architecture research.  This type of research is difficult to do with most simulators, because fully functional simulators rarely provide accurate enough timing to evaluate overheads, while timing simulators rarely provide enough functionality to run an operating system.  What is needed is a full system simulator with fidelity in both functionality and timing.

In the past our research group has used SimpleScalar for architectural research.  However, that project is no longer maintained, and the simulator is slowly becoming obsolete with respect to real-world systems.  It also does not provide full system simulation, although a set of patches by Jack Whitham merges RTEMS and SimpleScalar to provide full system simulation for RTEMS.  I tried to use Jack's patches, but I could not quite get everything working.  When I e-mailed him to ask about it, he suggested that I look at M5 as a more realistic simulator instead.  So I did.

M5 provides two types of simulators: Full System (FS) and Syscall Emulation (SE).


FS simulation is what I need, although it might be possible to layer experiments on an SE simulator using pthreads or something similar.  I have had trouble getting the UltraSPARC T1 SE (SPARC_SE) simulator to work with my RTEMS Sparc64 Port, although it might work with a little bit more massaging. However, I'm not sure it is sufficient for the type of research I want to do, particularly with RTEMS.  The main problem I see is that interrupts won't be delivered to the target (RTEMS), because the interrupt sources are not actually simulated.  This means there will be no timers and no pre-emptive multitasking, i.e. a very limited feature set.

According to the M5 wiki, the FS simulator currently supports only two ISAs: DEC Alpha with up to 4 cores (or 64 cores on an Alpha-derived CPU that has no real-world analog) and UltraSPARC T1 with 1 core.  These are referred to as ALPHA_FS and SPARC_FS respectively.

M5 also provides a variety of CPU models, with varying capabilities.
  • AtomicSimpleCPU provides basic functionality only.
  • TimingSimpleCPU provides CPI=1 with stalls on loads.
  • O3CPU provides a time-accurate out-of-order pipeline model.
  • InOrderCPU provides a time-accurate in-order pipeline model.
  • Checker is used to provide functional correctness.
As far as I can tell, SPARC_FS so far officially works only on AtomicSimpleCPU.  It might work with TimingSimpleCPU, but that would only match what I can already do with Simics Niagara and the Ruby module from Wisconsin GEMS.  Unfortunately, I haven't actually been able to get TimingSimpleCPU to work with SPARC_FS.  This means SPARC_FS is not very useful for architecture research yet.  However, as an open source simulator, it is a compelling project for helping to ensure that the Sparc64 RTEMS port can be tested.

ALPHA_FS appears to be capable of running on any of the CPU models, except for InOrderCPU.  According to the mailing list, full system is not implemented for any of the InOrder platforms.  However, RTEMS does not support the DEC Alpha ISA at all, and I'm not aware of any interest in it as a port.

Booting OpenSolaris on M5 SPARC_FS
Follow the instructions on the M5 wiki for getting started.  The examples given are for the Alpha targets.  You will also want to compile and install the m5term application.
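
On the tree I used, the m5term client lives in util/term and builds with a plain make; a rough sketch of installing it follows (the util/term location and the install path are assumptions on my part, so check the wiki if your tree differs):
$ cd util/term    # terminal client location in the M5 source tree I used
$ make
$ cp m5term /usr/local/bin/    # or anywhere on your PATH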

To build the SPARC full system targets, use:
$ scons build/SPARC_FS/m5.debug 
$ scons build/SPARC_FS/m5.opt
$ scons build/SPARC_FS/m5.prof
$ scons build/SPARC_FS/m5.fast
Download and extract the OpenSPARC T1 architecture and performance modeling tools.  Copy *.bin and nvram1 from OpenSPARCT1_Arch.1.5/S10image/ to the /dist/m5/system/binaries/ directory.  Also copy disk.s10hw2 from the S10image/ directory to the /dist/m5/system/disks/ directory.  Rename reset.bin, q.bin, and openboot.bin to reset_new.bin, q_new.bin, and openboot_new.bin, which are the binaries expected by the M5 SPARC_FS scripts.
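
Spelled out as commands, those copy and rename steps look roughly like this (the extraction path is whatever your download produced; adjust it and the /dist/m5/system prefix to your setup):
$ cd OpenSPARCT1_Arch.1.5/S10image    # wherever the tools were extracted
$ cp *.bin nvram1 /dist/m5/system/binaries/
$ cp disk.s10hw2 /dist/m5/system/disks/
$ cd /dist/m5/system/binaries
$ mv reset.bin reset_new.bin
$ mv q.bin q_new.bin
$ mv openboot.bin openboot_new.bin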

In a terminal window, start the simulator with:
$ build/SPARC_FS/m5.debug -d /tmp/output configs/example/fs.py
In another terminal, connect to the simulator with:
$ m5term localhost 3457
You should eventually see this in your m5term window:
==== m5 slave terminal: Terminal 0 ====
Sun Fire T2000, No Keyboard
Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.20.0, 256 MB memory available, Serial #1122867.
[mo23723 obp4.20.0 #0]
Ethernet address 0:80:3:de:ad:3, Host ID: 80112233.



ok
 The "ok" prompt is the OpenBoot prompt.  Just type boot and press enter, and the OpenSparc Solaris image will start to boot.

Booting RTEMS on M5 SPARC_FS
The sparc64 sun4v BSP will boot on M5's SPARC_FS full system simulator.  The first step is to bundle RTEMS executables and SILO onto a bootable ISO9660 filesystem.  We have some scripts to help create the bootable disk, and we use the same approach for booting RTEMS on Simics Niagara.  Next, edit the ${m5}/configs/common/FSConfig.py file and replace disk('disk.s10hw2') with disk('image.iso'), where image.iso is the ISO9660 filesystem image built for booting RTEMS.  M5 will look in /dist/m5/system/disks for the image.iso file, so link image.iso into that directory.
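
As a sketch, assuming the RTEMS boot image was written to /path/to/image.iso (a placeholder path), the disk swap and the link look like this; editing FSConfig.py by hand works just as well as the sed one-liner:
$ sed -i "s/disk('disk.s10hw2')/disk('image.iso')/" configs/common/FSConfig.py
$ ln -s /path/to/image.iso /dist/m5/system/disks/image.iso    # placeholder source path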

I have so far verified that Hello World (hello) and the ticker sample will boot and execute to completion.  However, there is no facility built into M5 for setting OpenBoot parameters; in particular, I cannot set auto-boot? to true without creating an nvram1 image that contains the setting.  I could probably write a script to drive the interactive portions of the m5term dialog, maybe even with a configuration checkpoint at the SILO boot prompt.
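
I have not written that script yet, but a crude sketch could bypass m5term and drive the terminal port directly, assuming the port speaks the same plain TCP stream that m5term does; the sleeps, the line ending, and the port number are guesses that would need tuning:
$ { sleep 60; printf 'boot\n'; sleep 600; } | nc localhost 3457    # timings and newline handling are guesses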

There are a few other tasks left that would be nice to resolve.  First, the simulator does not exit when the RTEMS application finishes.  This could be resolved by a script to detect the end of execution, which is what is done now with Simics, or by adding some instructions to cause the application to terminate.  Second, it might be nice to get RTEMS to run on SPARC_SE, although the functionality that would be provided on this platform is questionable.  Third, getting things to work on the InOrderCPU model would be ideal, although that functionality is not currently supported in m5 for SPARC_FS.  Fourth, I was not able to hook up the remote debugger (gdb) without getting some strange errors that are probably related to the flavor of gdb and the ABI assumed by M5's UART.  This last task was the result of a problem I faced trying to get SPARC_SE to work.
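
For the first item, a polling watchdog along these lines might do the job, assuming the RTEMS test prints its usual *** END OF TEST *** banner and that the console output lands in a file in the output directory (the system.terminal name is a guess; check what a run actually writes):
$ build/SPARC_FS/m5.opt -d /tmp/output configs/example/fs.py &
$ until grep -q 'END OF TEST' /tmp/output/system.terminal 2>/dev/null; do sleep 5; done    # log file name is a guess
$ kill %1    # stop the simulator once the banner appears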

Overall, I found it easy to compile and build the M5 simulator.  I also got a chance to look into the source, but I did not go very far, so I will withhold judgement there.  I am excited that the basic AtomicSimpleCPU SPARC_FS is able to boot and run RTEMS, which will help get the Sparc64 RTEMS Port accepted, included, and maintained in the RTEMS code base.
