International Symposium on System-on-Chip

What are the future SoCs made of?

Chairman:
Prof. Jari Nurmi, Tampere University of Technology

Participants:
Brian Bailey, Mentor Graphics, USA 
Dave Machin, CoWare, UK
John Goodacre, ARM, UK
Jos Huisken, Silicon Hive, The Netherlands
Steven Leibson, Tensilica, USA
Günter Zeisel, PACT XPP, Germany

Introductions

The panel session began with each participant giving a short presentation on a subject they considered important under the topic.

Brian Bailey

Brian Bailey introduced himself as a verification specialist. He presented a graph illustrating the growth in complexity of fabrication, designs, software, and verification. According to Bailey, there is no real gap between the rates at which designer productivity and silicon capacity are growing, since any unused capacity available on the chip can simply be filled with extra memory.

He also postulated that the real problems in the design process are software development and verification. Software causes a larger productivity bottleneck than hardware, since software productivity is not growing at a sufficient rate. The most significant bottleneck, however, is verification, which suffers from its own productivity problems, including the lack of more abstract methods for verifying systems. He continued that contemporary chips fail because of improper functionality, and that new technologies and the complexity of modern systems will make the situation even worse in the future. In general, an increasing share of the overall design time is being dedicated to verification. He also stated that before new architectures are introduced, it has to be known how to verify both their hardware and software contents.

According to Bailey, software development has long relied on a guarantee of repeatability, and the same guarantee should be adopted for hardware development as well. This requires better verification environments, and he argued that such environments should be delivered together with Intellectual Property blocks (IPs). Bailey concluded by emphasizing the importance of verification education and of design verification reuse: students should be taught good verification methods, and verification reuse would bring considerable productivity gains, since design reuse without verification reuse is close to useless.
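
As a loose software-level illustration of shipping verification along with an IP block (our own sketch, not Bailey's proposal; the block, the reference model, and all names are hypothetical), the vendor could deliver a golden reference model and a self-checking routine next to the implementation, so that an integrator can rerun the same checks in a new system context instead of rewriting them:

    #include <cstdint>
    #include <cstdio>

    // The delivered IP block's behaviour; a saturating 8-bit adder as a stand-in.
    uint8_t ip_sat_add(uint8_t a, uint8_t b) {
        unsigned s = a + b;                       // at most 510
        return (s & 0x100u) ? 0xFF : uint8_t(s);  // saturate on carry out of bit 7
    }

    // Golden reference model shipped with the IP, written independently.
    uint8_t ref_sat_add(uint8_t a, uint8_t b) {
        unsigned s = unsigned(a) + unsigned(b);
        return s > 0xFF ? 0xFF : uint8_t(s);
    }

    // Reusable check delivered with the block: compare IP against the
    // reference over the whole input space.
    bool verify_ip() {
        for (unsigned a = 0; a <= 0xFF; ++a)
            for (unsigned b = 0; b <= 0xFF; ++b)
                if (ip_sat_add(uint8_t(a), uint8_t(b)) != ref_sat_add(uint8_t(a), uint8_t(b)))
                    return false;
        return true;
    }

    int main() {
        std::printf("IP check %s\n", verify_ip() ? "passed" : "FAILED");
        return 0;
    }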

Dave Machin

Dave Machin started his presentation by noting that a conventional SoC is composed of a DSP with some memory blocks and peripherals connected via a bus. The trend, however, is toward systems consisting of many CPUs, dedicated processors, and memory subsystems. These new systems will be implemented using a mix of commercial IPs and/or proprietary processing elements connected via a network.

Machin postulated that the most important aspect of system design is a comprehensive design environment. As a solution, he presented the CoWare tool set, including the LISA framework and ConvergenSC. LISA is capable of automating the design of Application Specific Instruction set Processors (ASIPs), whereas ConvergenSC is a simulation environment suited to quick evaluation of SystemC models. These tools allow the co-design of hardware and software using a highly abstracted description of the system (Transaction Level Description, TLD) while still obtaining cycle-accurate results from simulation. He also believed that high simulation performance requires a unified simulation environment.
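
To make the abstraction difference concrete, the sketch below (plain C++ rather than the actual LISA or ConvergenSC interfaces; all class and function names are hypothetical) shows a transaction-level memory interface, where a whole bus read is a single function call annotated with a latency estimate, instead of being driven signal by signal over several clock edges:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Transaction-level view of a memory: one call models a whole bus access.
    class TlmMemory {
    public:
        explicit TlmMemory(std::size_t words) : mem_(words, 0) {}

        // A complete read transaction; the caller never sees the
        // address/ready/valid handshake, only the result and a delay.
        uint32_t read(std::size_t addr, unsigned& delay_cycles) {
            delay_cycles += 3;              // assumed fixed bus latency
            return mem_.at(addr);
        }

        void write(std::size_t addr, uint32_t data, unsigned& delay_cycles) {
            delay_cycles += 3;
            mem_.at(addr) = data;
        }

    private:
        std::vector<uint32_t> mem_;
    };

    int main() {
        TlmMemory mem(1024);
        unsigned cycles = 0;                // accumulated latency estimate
        mem.write(16, 0xCAFEu, cycles);
        uint32_t v = mem.read(16, cycles);
        std::printf("read 0x%X after ~%u cycles\n", v, cycles);
        return 0;
    }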

John Goodacre

John Goodacre started his presentation by stating that future SoCs are made of IP blocks, and he suggested putting multiple cores on a chip. To manage the wide range of different cores, he classified them into control plane cores and data plane cores. According to Goodacre, data plane cores require configurability for system-level reuse. Goodacre was confident that cores are moving to lower abstraction levels, because putting together high-level blocks is fairly difficult: there are simply too many configuration unknowns to be able to guarantee any degree of performance. He concluded that in the future, system assembly will be driven by software requirements.

Jos Huisken

Jos Huisken emphasized the importance of power budgeting: the desired performance has to be achieved within a given power budget. Power is largely determined by wire activity, and Huisken reminded the audience that approximately 75 % of the overall power is consumed in wires at 120 nm technologies. He therefore stressed the importance of locality of reference, recommending local registers and distributed register files with short local connections. In other words, storage has to be located close to the computation.
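
As a software-level analogy to this point (an illustration of ours, not an example Huisken gave), the sketch below keeps a running sum in a local variable that a compiler can hold in a register next to the datapath, instead of updating a distant memory location on every iteration; in hardware terms, the storage stays close to the computation and the long wires are exercised only once:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Accumulate locally, write to the "distant" storage once at the end.
    uint64_t sum_local(const std::vector<uint32_t>& data, uint64_t& distant_result) {
        uint64_t acc = 0;                   // lives in a register near the ALU
        for (uint32_t x : data) {
            acc += x;                       // no memory traffic per iteration
        }
        distant_result = acc;               // single write over the "long wire"
        return acc;
    }

    // Anti-pattern: update the distant location on every iteration,
    // exercising the interconnect each time.
    uint64_t sum_remote(const std::vector<uint32_t>& data, uint64_t& distant_result) {
        distant_result = 0;
        for (uint32_t x : data) {
            distant_result += x;            // read-modify-write across the wire every pass
        }
        return distant_result;
    }

    int main() {
        std::vector<uint32_t> data(1000, 1);
        uint64_t distant = 0;
        std::printf("local: %llu, ", (unsigned long long)sum_local(data, distant));
        std::printf("remote: %llu\n", (unsigned long long)sum_remote(data, distant));
        return 0;
    }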

Besides locality of reference, Huisken considered the heterogeneous nature of systems worth exploiting. Heterogeneity means domain-specific instructions and specialization, and specialization of the functional units will be a key issue in reaching performance goals within the power budget. According to Huisken, future SoC designs will lie somewhere between CPU/DSP designs and ASIC designs, and one of the most essential issues is to find a balance between system flexibility and efficiency. In conclusion, Huisken discussed technology issues connected to power consumption, including active power, supply switching, and leakage current, the last of which he considered the most important.
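
One concrete flavour of a domain-specific instruction (our own example, not one from the panel) is a media-oriented sum-of-absolute-differences operation: a specialized functional unit can collapse the small loop below into a single operation, whereas a general-purpose core spends a subtract, an absolute value, and an add per pixel.

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // What a specialized SAD functional unit would compute in one operation:
    // the sum of absolute differences over a small block of pixels.
    uint32_t sad8(const uint8_t* a, const uint8_t* b) {
        uint32_t sad = 0;
        for (int i = 0; i < 8; ++i) {
            sad += uint32_t(std::abs(int(a[i]) - int(b[i])));
        }
        return sad;
    }

    int main() {
        uint8_t ref[8] = {10, 20, 30, 40, 50, 60, 70, 80};
        uint8_t cur[8] = {12, 18, 33, 40, 47, 61, 70, 79};
        std::printf("SAD = %u\n", sad8(ref, cur));
        return 0;
    }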

Steven Leibson

Steven Leibson agreed with Brian Bailey about the design gap and verification issues. Leibson concentrated on configurable processors and reconfigurable logic. He began with an economic perspective and introduced the term ROI (Return On Investment), calculated by dividing return by investment. According to Leibson, SoC flexibility equals cost reduction: SoC designers need processor cores that take on more tasks in order to reduce the need for custom-built RTL, so that one flexible chip is capable of serving various applications. He strongly believed in software programmability rather than custom-built hardware, and proposed SoCs built as a "sea of processors", in which the basic building block is a simplified processor. Leibson claimed that the processor is the transistor of today, that the number of processors per chip has been growing exponentially, and that this trend will continue. In his view, future SoCs will be made of a thousand processors.
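
For illustration only (hypothetical figures, not numbers given by Leibson): a platform that costs 2 million to develop and brings in 10 million of return has ROI = 10 / 2 = 5, and anything that raises the return or lowers the investment, such as reusing one flexible chip across several applications, improves that ratio.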

Günter Zeisel

Günter Zeisel suggested that reconfigurable SoCs (ReSoCs) will be built in the future. Zeisel listed various areas of real challenge for future SoCs. According to Zeisel, nanotechnology is the key issue, with intolerable leakage currents appearing at technologies below 100 nm; the largest problems are leakage and power consumption. In addition, copper resistivity, capacitance, clock insertion, crosstalk, noise, and tight power budgets tend to reduce the signal-to-noise ratio. He also mentioned clock distribution as a crucial and very time-consuming task. Other challenges mentioned were investment problems, standards, design flow, debugging, and system simulation.

He offered platform-based design as a solution to many of the listed challenges. Systems will incorporate increasing amounts of embedded memory, for example for saving state for recovery, and the amount of memory available on-chip is a very important parameter; interfacing an external memory is not recommended because it consumes too much power. He assumed that a reconfigurable system-on-chip should be software dominated: since there will be no reconfigurable silicon for the next ten years, a Real-Time Operating System (RTOS) and software are the keys to system design.


Discussion

The discussion consisted of answering questions from the audience and commenting on the claims made by the participants.

Lower level cores

The discussion began with a question addressed to John Goodacre concerning his claim about lower abstraction levels. He explained that by lower level he meant a granularity finer than a whole processor, since smaller task blocks are needed. In his opinion, a single core does not determine the whole application; rather, it determines some lower-level part of it. For example, a core does not define a complete video application, but it may define the DCT.

Manage 1000 processor chips with 1 compiler

The idea of managing chips with 1000 processors using a single compiler was raised by the audience and led to some debate. Some believed that symmetric multiprocessor systems are easy to program and that one processor can be treated as a single RTL block, whereas others said this would not work. It was also pointed out that, besides the hardware, the software content has to be verified on the platform in order to verify correct operation of the system. There was also a discussion about whether the different processors in a 1000-processor system would perform different functions. Some of the panelists thought that functionality would be replicated and that there is no need for a thousand distinct functionalities; the problem, they believed, would be programming different tasks onto different processors. Others assumed that there must be at least some degree of heterogeneity between the blocks.
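
A minimal software sketch of the "replicated functionality" view (an illustration of ours, not any panelist's proposal; core_main, core_id, and NUM_CORES are hypothetical names): every core runs the same compiled program, and the only per-core difference comes from the core's own identifier, which is how a single compiler output could serve a large symmetric array.

    #include <cstdio>

    constexpr int NUM_CORES = 1000;     // hypothetical core count

    // The same image runs on every core; the core id selects its slice of work.
    void core_main(int core_id, const int* data, int n, long* partial) {
        long sum = 0;
        for (int i = core_id; i < n; i += NUM_CORES) {  // interleaved work split
            sum += data[i];
        }
        partial[core_id] = sum;
    }

    int main() {
        static int data[10000];
        static long partial[NUM_CORES];
        for (int i = 0; i < 10000; ++i) data[i] = i;
        for (int c = 0; c < NUM_CORES; ++c) core_main(c, data, 10000, partial);
        long total = 0;
        for (int c = 0; c < NUM_CORES; ++c) total += partial[c];
        std::printf("total = %ld\n", total);
        return 0;
    }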

Caches in multiprocessor systems

The audience also wondered about the use of private caches in multiprocessor systems. The general opinion was that caches should be avoided in multiprocessor systems, as they are targeted at single-processor systems. In addition, decoupling computation from communication was offered as an answer to timing problems.
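
One common way the "decouple computation from communication" idea shows up in practice is double buffering, sketched below under our own assumptions (not a technique attributed to any panelist): while the processing element computes on one buffer, the communication fabric fills the other, so neither side depends on the other's exact timing.

    #include <array>
    #include <cstdio>

    constexpr int BUF_WORDS = 4;

    // Stand-in for a DMA or NoC transfer that fills a buffer in the background.
    void fetch_next_block(std::array<int, BUF_WORDS>& buf, int block) {
        for (int i = 0; i < BUF_WORDS; ++i) buf[i] = block * BUF_WORDS + i;
    }

    int process_block(const std::array<int, BUF_WORDS>& buf) {
        int sum = 0;
        for (int v : buf) sum += v;
        return sum;
    }

    int main() {
        std::array<int, BUF_WORDS> ping{}, pong{};
        fetch_next_block(ping, 0);                 // prefetch the first block
        int total = 0;
        for (int block = 0; block < 8; ++block) {
            auto& compute_buf = (block % 2 == 0) ? ping : pong;
            auto& fill_buf    = (block % 2 == 0) ? pong : ping;
            if (block + 1 < 8) fetch_next_block(fill_buf, block + 1);  // communication
            total += process_block(compute_buf);                       // computation
        }
        std::printf("total = %d\n", total);
        return 0;
    }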

Hardware and software design

The audience inquired whether it is safer to design everything in software and implement only the critical parts in hardware. The panelists considered this a good approach; after a proper partitioning, the remaining problem is timing closure. The partitioning of the design, however, caused disagreement. Some preferred designing with large blocks, arguing that less time is then spent designing blocks and more time performing simulation and verification, while the opposing opinion was that such a methodology consumes too much time verifying the simulation models. Although future structures will be repetitive, the granularity of such a design still has to be determined. Formal verification was also mentioned to be hard; the formal model should be the only model, used for all verification. Fortunately, current tools increasingly include verification.

Tools

The audience was interested in whether design tools are going to provide more visibility to the user. The answer was that tools only help evaluate the design decisions made by the user: the evaluation is performed by checking the output and making a decision, the iteration is quick, and most of the time is spent by the designer. Low-level implementation details are no longer as important; instead, deep insight into system operation is now required.

Optical connections

There was also a discussion about optical connections. Optical connections are being widely experimented with, but current CMOS technology is not compatible with them.

FPGA

One debate concerned FPGAs. Some of the panelists thought that FPGAs are too expensive to use, since a large portion of FPGA area is devoted to routing; there must be a trade-off between functionality, flexibility, and cost. Others believed that FPGAs are not too expensive. FPGAs are increasingly resembling SoCs, moving towards general-purpose devices that include memories, multipliers, and CPUs, and the strategy of FPGA companies nowadays is to enter the consumer markets. However, if a product is destined for high-volume production, an FPGA implementation is not a suitable option; for example, an FPGA implementation is on the order of ten times slower than an ASIC implementation. The real cost is in design, not in mask production.

Future SoC requirements

The chairman concluded the panel by asking the participants to answer, in one word, what is needed for future SoCs. Brian Bailey started by answering time, Dave Machin demanded standards, whereas John Goodacre and Jos Huisken suggested money. Finally, Steven Leibson answered luck, while Günter Zeisel needed nothing.