International Symposium on System-on-Chip

SoC 2005 Panel Discussion Summary

Networks on Chip

Chairman:
Prof. Jari Nurmi, Tampere University of Technology

Participants:
Prof. Axel Jantsch, Royal Institute of Technology in Stockholm
Mr. Steve Leibson, Tensilica
Mr. Alain Fanet, Arteris
Mr. Günter Zeisel, StarCore
Prof. Gerard Smit, University of Twente
Prof. Jouni Isoaho, University of Turku

Discussion

The panel discussion consisted of the panelists answering questions from the chairman and the audience, and commenting on each other's claims. The chairman opened the discussion by noting that a Network-on-Chip (NoC) that works in zero time and consumes no power or resources is not possible, and then asked the panelists what their dream network would be like.

Dream network?

The panelists' opinions about a dream network are divided. Axel Jantsch's dream network would have constant overhead across the whole system, independent of its size. Steve Leibson points out that people usually have no idea what needs to be networked; his dream network is therefore easy to realize and contains only those things that actually need to be networked. Both Alain Fanet and Günter Zeisel want a wireless network, and Zeisel wants to concentrate on Networks-off-Chip. Fanet also points out that an ideal NoC would be suitable for any application. Gerard Smit and Jouni Isoaho emphasize scalability: Smit points out that a network needs to be predictable even if it is scalable, whereas Isoaho thinks that the dream network should also be autonomous and fault tolerant.

How can we convince the application designers to adopt the NoC approach?

Leibson claims that it is not our job to convince the application designers to adopt the NoC approach. Smit thinks that convincing application designers would be easy as long as the programmers understand the distributed memories; application programmers have to change their habits and get used to the idea of shared memory, he continues. Fanet adds that software designers should be able to work with a larger memory space, subdivided into sub-spaces, each dedicated to a different application. This would simplify the software. Smit replies that the task of NoC design should be to design not only the hardware but also the related tools, so that designers can rely on an "automated" flow.
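
As a rough illustration of Fanet's subdivided memory space (a sketch only; the addresses, sizes, and application names below are invented, not taken from the discussion), a shared address space could be partitioned into per-application sub-spaces like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical memory map: one shared address space, subdivided into
     * sub-spaces, each dedicated to a different application. */
    typedef struct {
        const char *app;   /* application owning the sub-space */
        uint32_t    base;  /* start address of the sub-space   */
        uint32_t    size;  /* size of the sub-space in bytes   */
    } mem_subspace_t;

    static const mem_subspace_t memory_map[] = {
        { "video_decode", 0x10000000u, 0x00400000u },  /* 4 MiB */
        { "audio",        0x10400000u, 0x00100000u },  /* 1 MiB */
        { "networking",   0x10500000u, 0x00200000u },  /* 2 MiB */
    };

    /* Return the application that owns a given address, or NULL. */
    static const char *owner_of(uint32_t addr)
    {
        for (unsigned i = 0; i < sizeof memory_map / sizeof memory_map[0]; i++)
            if (addr >= memory_map[i].base &&
                addr - memory_map[i].base < memory_map[i].size)
                return memory_map[i].app;
        return NULL;
    }

    int main(void)
    {
        printf("0x10480000 is owned by: %s\n", owner_of(0x10480000u));
        return 0;
    }

Each sub-space gives one application a private region inside a single global space, which is what keeps the software simple.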

What about combining software and a NoC to build an application, versus considering a SoC as a general-purpose processor for applications?

Fanet would mix both views; he thinks that software should be globally general and only locally optimized. Jantsch prefers a homogeneous general-purpose application platform or a traditional SoC in which the hardware is customizable. He also points out that different platforms from different sources need to be integrated; the platforms can then communicate over the available communication services. Leibson continues that some people like FPGAs, some people like customizable systems, and the choice is a matter of what you can afford and want to pay. Isoaho concludes that the task consists of doing things systematically: we don't want to consider the whole thing at the same time, but to focus on each component separately. We should then concentrate on optimizing an application for one IP block, and keep the NoC invisible.

If a NoC is invisible, is that the best approach? And if the interface is clear, does it benefit from a standard?

Fanet is not sure about this; he thinks that the best approach is the one that satisfies the needs. Leibson does not see a way to standardize at all. Jantsch thinks that platforms can be used by people who are familiar with them; sometimes standards could be possible, since they open possibilities and might help a lot, he continues. Smit has learned that the best thing is to get rid of layers: it is not efficient to pack, then send, then unpack, so people must be careful when building layer upon layer. Zeisel points out that NoCs will not survive the next 10 years. His solution is a Network-near-Chip, with wireless connections and a GALS approach, although it is difficult to deal with many different clock domains. Isoaho continues by saying that SoCs, NoCs, and ASICs are just contemporary solutions; complexity keeps growing no matter whether systems are on- or off-chip. We want systems to be fault tolerant, so that they keep working even when an IP component fails. Smit agrees with Isoaho and predicts that we really need to look further to the horizon: finding a solution may take the next 10 years.
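
Smit's pack-send-unpack concern can be made concrete with a toy calculation; the payload and header sizes below are assumptions for illustration, not figures from the discussion:

    #include <stdio.h>

    /* Toy illustration of per-layer packing overhead: each layer wraps the
     * data in its own header, so the useful fraction of each packet shrinks. */
    int main(void)
    {
        const int payload = 64;          /* bytes of useful data (assumed)      */
        const int header_per_layer = 8;  /* bytes added by each layer (assumed) */

        for (int layers = 0; layers <= 4; layers++) {
            int total = payload + layers * header_per_layer;
            printf("%d layer(s): %3d bytes on the wire, %5.1f%% useful\n",
                   layers, total, 100.0 * payload / total);
        }
        return 0;
    }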

Future of guaranteed services in industry?

Smit explains that at Philips, 90% of the throughput is guaranteed. This is good because DSP applications need guaranteed throughput and traffic. Leibson says that at least video and multimedia, among a huge number of other things, need guaranteed services and throughput; he emphasizes the need for guarantees in biomedical applications, for instance. Fanet explains that at Arteris, their aim is to find out the customers' needs. If a 100% guarantee is not required, he explains, guaranteed services are more of a psychological aspect: they cannot guarantee, but they can be predictable instead, because predictability is the key issue and could help the application designer a lot. Zeisel justifies the need for guaranteed services by noting that when using a GPS, for instance, it is not acceptable not to know where we are; the service must be guaranteed in such a way that customers feel they know the throughput. Fanet comments that by guaranteeing throughput and maximum latency we solve only a small part of the problem: we need a global guarantee, but the fact is that we don't know how to do that. Leibson emphasizes that a guaranteed service and an artificial need for it are two different things. Jantsch finally concludes that predictability is nice because it gives us the possibility to build a system successfully, but usually we optimize the average case and not the worst one.

What is the price of guaranteed throughput?

Isoaho's opinion is that predictability is always needed; if a guarantee is wanted, systems must be dynamic. He also wonders who takes the responsibility for guaranteeing something. Smit says that to guarantee the throughput we must optimize the system for the average case; this requires a lot of memory, so it costs a lot in terms of area and money. Jantsch then adds that we should look at one case in particular and optimize the system case by case.
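
A back-of-the-envelope sketch of why a hard guarantee costs memory (all rates and burst lengths below are assumed for illustration): a buffer must absorb whatever a worst-case burst delivers above the rate the link can drain.

    #include <stdio.h>

    /* Toy buffer-sizing calculation: the link drains at a sustained rate,
     * but the source can burst at a higher peak rate. To guarantee no loss,
     * the buffer must absorb the excess for the whole burst. */
    int main(void)
    {
        const double peak_rate = 400.0;  /* MB/s during a burst (assumed)     */
        const double link_rate = 100.0;  /* MB/s sustained drain (assumed)    */
        const double burst_ms  = 2.0;    /* worst-case burst length (assumed) */

        /* Excess data accumulates at (peak - link) for the burst duration;
         * MB/s times ms conveniently yields KB. */
        double buffer_kb = (peak_rate - link_rate) * burst_ms;

        printf("Buffer for a hard guarantee: %.0f KB\n", buffer_kb);
        printf("Buffer for the average case: far less, at the price of stalls\n");
        return 0;
    }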

What is a good benchmark for performance?

The panelists seem to have difficulty answering this question. Zeisel sidesteps it by answering that everything can be done, but it is costly. Leibson continues that processor benchmarking, for instance, has been done for a long time, but people still do not agree on what is good; with NoCs the situation is even worse. Fanet wonders how the network could be characterized. He thinks that if the job can be done with a certain amount of gates and a certain amount of each other resource, that is the best that can be done. He continues that tools to ease the estimation are needed, and there are too many parameters to do the job just like that.

Zeisel suggests that one possibility could be to put together applications for specific ASICs, then try them with different NoCs. Isoaho is quite skeptical, because he thinks that to define a benchmark we should know what kind of applications will be needed in NoCs in the future. Fanet states that a benchmark in general tries to put together every application, but then we need to apply a given metric: we need a scale to evaluate the packet switching capacity, the power consumption, and so on. Moreover, characterizing performance is very difficult; we need to define a set of criteria. Smit concludes that we can then take an application field and use its most popular applications as a benchmark.
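
One way to read Fanet's "set of criteria" is as a weighted score over a few measured quantities. The metrics, weights, and numbers below are purely hypothetical; agreeing on them is exactly the hard part the panel identifies:

    #include <stdio.h>

    /* Hypothetical composite NoC score: higher throughput is better; lower
     * latency, power, and area are better. The weights are arbitrary. */
    typedef struct {
        double throughput_gbps;  /* aggregate packet-switching capacity */
        double latency_ns;       /* average packet latency              */
        double power_mw;         /* power consumption                   */
        double area_mm2;         /* silicon area                        */
    } noc_result_t;

    static double score(noc_result_t r)
    {
        return r.throughput_gbps / (0.01  * r.latency_ns +
                                    0.001 * r.power_mw  +
                                    0.5   * r.area_mm2);
    }

    int main(void)
    {
        noc_result_t a = { 32.0, 120.0, 450.0, 2.0 };  /* invented numbers */
        noc_result_t b = { 24.0,  80.0, 300.0, 1.5 };
        printf("NoC A score: %.2f\nNoC B score: %.2f\n", score(a), score(b));
        return 0;
    }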

How do you see the future of NoCs? Are they application specific or standard abstractions?

Zeisel starts by saying that for sure they are standard. A standard is better because we do not have to worry about an application in particular, but about a more general concept: for example, we will then worry about motion estimation, not about H.264, and so on. Jantsch agrees and continues that future NoCs are more standardized general-purpose platforms; we expect a NoC to be an abstraction, which we then customize. Smit also agrees; he thinks that application programmers will define their modules and then connect them together. Fanet instead has an opposite view: he thinks that by standardizing we will lose the competitiveness of the product. Even though it is true that we need a standard network interface for a NoC, we cannot go too far; standards kill competition, and we need competition. Leibson says the same, citing commercial reasons and the fact that it usually takes years to define a standard. In particular, he observes that if we have not been able to get a standard for buses in three years, most likely we are not going to get a standard for NoCs at all. Jantsch replies that by "standard" he does not mean something like an IEEE standard, but just some common points that people normally expect of a NoC: for instance, whatever bus we have, it is in any case expected to exchange data between IP components. Fanet adds at this point that having some common points is certainly in everybody's interest. Jantsch remarks that this way testing will also be easier. Isoaho says that at least the verification flow should be standard. Smit concludes that if you have a SoC, you also have a fault-tolerant architecture.
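
Jantsch's "common points" can be pictured as a minimal network-interface contract that IP components expect from any NoC, whatever its internals. The packet layout and function names here are hypothetical, not any real standard; a single-slot loopback stands in for the interconnect just to make the contract concrete:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint16_t noc_addr_t;   /* identifies an IP component */

    typedef struct {
        noc_addr_t dst, src;       /* destination and source components */
        uint16_t   len;            /* payload length in bytes           */
        uint8_t    payload[64];    /* flit-sized payload (size assumed) */
    } noc_packet_t;

    /* The "common points": a send and a receive, nothing more is assumed. */
    static noc_packet_t mailbox;
    static int mailbox_full = 0;

    static int noc_send(const noc_packet_t *pkt)
    {
        if (mailbox_full) return -1;       /* backpressure */
        mailbox = *pkt;
        mailbox_full = 1;
        return 0;
    }

    static int noc_recv(noc_addr_t self, noc_packet_t *out)
    {
        if (!mailbox_full || mailbox.dst != self) return -1;
        *out = mailbox;
        mailbox_full = 0;
        return 0;
    }

    int main(void)
    {
        noc_packet_t p = { .dst = 2, .src = 1, .len = 5 };
        memcpy(p.payload, "hello", 5);
        noc_send(&p);

        noc_packet_t q;
        if (noc_recv(2, &q) == 0)
            printf("IP %u received %u bytes from IP %u\n",
                   (unsigned)q.dst, (unsigned)q.len, (unsigned)q.src);
        return 0;
    }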

Are NoC designers happy with the design flow?

The panelists think that system-level design is becoming more and more important. Systems should be more parallel, concurrent, and modular, but parallel programming has never been a hobby of system designers, so tools are required to help and speed up the design. The panelists also think that C will be the programming language of parallel systems: 99 percent of programmers can write modular C, so that is what they will program. Leibson mentions that designers are never happy with any of the tools provided. Fanet also emphasizes the need for tools: we must use tools, because complexity is an issue and the market has to accept the complexity that comes with NoCs. It is then a good idea to train people to use the tools that allow them to deal with such complexity.

Is the design time well spent?

Leibson says that it depends on the language we use to program. Zeisel adds that it would be a dream to have Matlab as an input language, so that we could "push the button" and get the silicon.

Are verification and testing capabilities able to keep pace with growing systems?

Smit starts by saying that a modular design approach is usually good; it is then hopefully enough to guarantee good results regardless of the size of the system. Leibson criticizes the industry approach, saying that companies have been stupid about testability; we would indeed need powerful parallel testing capabilities. Zeisel observes that a key point is to find a smart way to test the complete interconnect. Isoaho concludes by saying that modularity will simplify the testing anyway.

Is there any way to reproduce all possible cases for complex devices?

Fanet says that there is indeed no way to reproduce all the possible cases for complex devices. A NoC is the central place of the system, and you cannot have full coverage of it. We therefore need statistics that tell us how the chip works, and from those statistics we must extract diagnostic methods and test vectors. We have a "learning curve" that represents how well we know our system.

Do you know any product that has a NoC inside?

Fanet says that he knows some, but the information is confidential. Zeisel adds that he cannot name a particular product, but that NoCs are anyway used in the field of digital audio broadcast.