Multicore NZ

March 26, 2009

Linus Torvalds, Patterson and different views (or different worlds?)

Filed under: Multicore, Parallel Programming — multicoreblog @ 8:25 am

I am working on the “Open Source Learning Lab” project.

In this context, while looking for content for the Topics section of Eduforge, I came across a nice presentation about the history of Linux by Linus Torvalds. It is an 85-minute story, told in a very friendly way with questions from the audience, recorded about a week or two after September 11, 2001, and sponsored by the Computer History Museum. It is (after the overview of the Museum itself) the most viewed of the 60 excellent videos stored there. It is enjoyable listening to a “young” Torvalds speaking about the first 10 years of Linux! I recommend it.

But what is the link between the video and this blog?

One of the last questions at the end of the video was about SMP (Symmetric Multiprocessing, the use of multiple CPUs). Torvalds answered something like: “I don’t see this coming in at least five years, but it is definitely a very interesting place to be, and it will involve NUMA” (Non-Uniform Memory Access, a computer memory design used in multiprocessors).

That was enough to catch my attention, and I started searching to see whether Linus’s interest continued after those five years… What I found was a sort of soap opera, pretty much a conversation in different languages…

Someone posted this in Real World Technologies:

“…Quoted from the white paper “The Landscape of Parallel Computing Research: A View from Berkeley”. This paper just gave Berkeley $10M over 5 years from MS and Intel to research the future of parallel computing. Happy reading…”

The first answer, titled “disappointing”, said:

Name: Linus Torvalds (2/14/08)

Ugh. They seem to make essentially all of their arguments based on their “dwarfs” (shouldn’t that be “vertically challenged algorithm”?).

And their dwarfs in turn seem entirely selected to then support the end result they wanted. Can anybody say “circular argument” ten times fast?

Apart from the obvious graphics thing, none of their loads seem at all relevant to “general purpose computing”, they are all essentially about scientific computing.

And we already pretty much know the solution to scientific computing: throw lots of cheap hardware on it (where “cheap” is then defined by what is mass-produced for other reasons).

Designing future hardware around the needs of scientific computing seems ass-backwards. It’s putting the cart in front of the horse.


Which, after a couple of threads and good comments from others, received this answer (abridged):

Name: David Patterson (2/15/08)

Since we spent almost 2 years of our lives working on this report, I’d add my perspective to this discussion.
* The goal is to raise the level of abstraction to allow people the space that we’ll need to be able to make the manycore bet work, rather than to be hamstrung by 15-year-old legacy code written in 30-year-old programming languages.

* Based on our 2-year investigation, we make the provocative claim that your programming language, compiler, libraries, computer architecture … better be able to handle these design patterns well, because they will be important in the upcoming decade in many apps. There are likely more design patterns than these 13, but they include, for example
– Finite State Machines
– Branch and Bound
– Graph Algorithms
which aren’t in most people’s lists of scientific computing problems.

Our bet is that the best applications, the best programming languages, the best libraries,… have not yet been written.

The challenge is for this next generation of software to be correct, efficient, and to scale with the increasing number of processors, without overburdening programmers. If we as a field can succeed at this amazingly difficult challenge, the future looks good. If not, then the performance increases we have relied upon for decades will come to an abrupt halt, likely diminishing the future of the IT industry.

Dave Patterson, UC Berkeley
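Patterson’s “dwarfs” are recurring computational patterns rather than benchmarks. To make the list concrete, here is a rough sketch of my own (not from the Berkeley report) of one of them, branch and bound, applied to the classic 0/1 knapsack problem:

```python
from heapq import heappush, heappop

def knapsack_bb(values, weights, capacity):
    """Best-first branch and bound for the 0/1 knapsack problem.

    Assumes all weights are positive. Returns the maximum total value.
    """
    n = len(values)
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    vals = [values[i] for i in order]
    wts = [weights[i] for i in order]

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally.
        b = value
        while i < n and wts[i] <= room:
            room -= wts[i]
            b += vals[i]
            i += 1
        if i < n:
            b += vals[i] * room / wts[i]
        return b

    best = 0
    # Heap entries: (-bound, next item index, value so far, remaining room).
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_b, i, value, room = heappop(heap)
        if -neg_b <= best:  # prune: this subtree cannot beat the incumbent
            continue
        if i == n:
            continue
        # Branch 1: take item i (if it fits).
        if wts[i] <= room:
            v = value + vals[i]
            best = max(best, v)
            heappush(heap, (-bound(i + 1, v, room - wts[i]), i + 1, v, room - wts[i]))
        # Branch 2: skip item i.
        heappush(heap, (-bound(i + 1, value, room), i + 1, value, room))
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # → 220
```

The pruning step is what makes the pattern relevant beyond scientific computing: the search subtrees can be explored concurrently, but the shared incumbent `best` creates exactly the kind of coordination problem the report worries about.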

Which received this answer from Torvalds (abridged):

David Patterson on 2/15/08 wrote:
>* The goal is to raise the level of abstraction to allow people the space that we’ll need to be able to make the manycore bet work, rather than to be hamstrung by 15-year-old legacy code written in 30-year-old programming languages.

Well, you basically start off by just assuming it can work, and that nothing else can.

That’s a big assumption. It’s by no means something you should take for granted. It’s a wish that hasn’t come to fruition so far, and quite frankly, I don’t think people are really any closer to a solution today than they were two decades ago.

The fact that you can find application domains where it does work isn’t new.

We’ve had our CM-5’s, we’ve had our Occam programs, and there’s no question they worked. The question is whether they work for general-purpose computing, and that one is still unanswered, I think.

The problem isn’t CPU time, it’s memory and IO footprint.

And I don’t think that is unheard of elsewhere. The core algorithm could well be parallelizable, but the problem isn’t the linear CPU speed, it’s the things outside the CPU.

>Our bet is that the best applications, the best programming languages, the best libraries,… have not yet been written.

If we are looking at a 100+ core future, I certainly agree.

>If we as a field can succeed at this amazingly difficult challenge, the future looks good. If not, then the performance increases we have relied upon for decades will come to an abrupt halt, likely diminishing the future of the IT industry.

Here’s my personal prediction, and hey, it’s just that: a guess:
(a) we’ll continue to be largely dominated by linear issues in a majority of loads.
(b) this may well mean that the future of GP computing ends up being about small, and low power, and being absolutely everywhere (== really dirt cheap).

IOW, the expectation of exponential performance scaling may simply not be the thing that we even want. Yeah, we’ll get it for those nice parallel loads, but rather than expect everything to get there, maybe we should just look forward to improving IT in other directions than pure performance.

If the choice becomes one of “parallel but fast machines” and “really small and really cheap and really low power and ‘fast enough’ ones with just a couple of cores”, maybe people will really pick the latter.

Especially if it proves that the parallel problem really isn’t practically solvable for a lot of things that people want to do.

Pessimistic? It depends on what you look forward to.
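Torvalds’ point (a) is essentially Amdahl’s law: if a fraction of a workload stays serial, the total speedup is bounded no matter how many cores you add. A quick back-of-the-envelope illustration (my example, not from the thread):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a workload parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 90% parallel tops out below 10x, whatever the core count:
for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

A workload that is 90% parallel never exceeds a 10x speedup, which is why Torvalds argues the big wins go only to the “nice parallel loads”.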


My comment?

If the funding had gone to a university somewhere else and not to Berkeley, Torvalds probably wouldn’t have commented.

If someone else had made the observation, Patterson probably wouldn’t even have noticed it, let alone answered.

Is this a personalities game, or is there some real potential here? Funding goes to the most unusual places with the most unrealistic expectations. As Torvalds said in his 2001 speech, you never start a programming project knowing exactly what the final result will be…

And I’m not saying that he is wrong in his observations, but it sounds as though both are more focused on the “right/wrong” positions and on defending their pride.

Patterson also said in 2006 that “…maybe we should be putting a big RAMP box out there on the Internet for the open source community, to let them play with a highly scalable processor and see what ideas they can come up with. I guess that’s the right question: What can we do to engage the open source community to get innovative people… The parallel solutions may not come from academia or from research labs as they did in the past”.

Are these two worlds coming closer?

Nicolas Erdody




  1. Both Torvalds and Patterson show some ignorance about the world of parallel computing. The notion that no progress has been made in the last 20 years is nonsense. I refer in particular to the work at MIT on Cilk, which provides a high-level abstraction while offering provable guarantees of performance. Cilk received the 10-year retrospective award for most influential paper from the PLDI conference and was mentioned in well over half the talks at the recent PPoPP conference.

    Even though its work-stealing scheduler provides the basis of most modern multicore-programming environments — Cilk++, OpenMP 3.0 tasking, Intel’s TBB, Sun’s Fortress, and Microsoft’s TPL — the original Berkeley white paper completely fails to mention Cilk (although Cilk is mentioned briefly in the revision). This technology was recently spun out of MIT to Cilk Arts, Inc. Cilk++ (for C++) is now available in open source and contains many improvements over the C-based MIT technology. For example, the race-detection technology for Cilk and Cilk++ provides the only provably good strategy to date for ensuring the absence of race conditions.

    Although the biggest challenge for multicore software remains education (including combatting ignorance), numerous applications have been programmed using Cilk/Cilk++ by programmers with ordinary skill, and it is being taught at dozens of universities. Cilk has been my research for over 15 years, and although I regret having to toot my own horn, I ask that you and readers of your blog check out this technology for yourselves at I also encourage you to read my short e-book, “How to Survive the Multicore Revolution (or at Least Survive the Hype)” at

    Comment by Charles E. Leiserson — March 28, 2009 @ 3:48 am

  2. Hello Charles,

    I have always believed that ignorance is a shared responsibility between the ignorant and the one who “knows”. IMHO that is the case with parallel programming (and I’m not “blaming” you in this case :-).

    I’m a “recent arrival” to the world of PP, and day after day I am more surprised at how mixed people’s impressions of it are. It only takes some general information, and watching what is happening around us, to realise that PP is part of the solution, not of the problem.

    We met in Santa Clara at Multicore 2008, and Ilya also lectured me extensively about Cilk… Probably once my friend Dr Zhiyi Huang is back at Otago University, I will learn more about the most recent developments of Cilk.

    I would appreciate it if you could comment more about Cilk++ as open source: how has it been received? Who has been using it? What for? Only in research environments, or in HPC?

    Nicolas Erdody

    Comment by multicoreblog — March 28, 2009 @ 5:22 am

  3. I was listening to a Swedish diplomat “permanent resident” on a Taiwanese newscast. If we cannot provide a more seamless integration of business and university, we probably will not foster a viable software model. The diplomat spoke of environmental factors for development of new technology, and a safety net for the entrepreneur who risks failure in the encounter with challenging accomplishment.

    Isn’t this a major issue? Can we agree to allow the universities to communicate freely with the demands of the marketplace? The billions of dollars poured into Chinese software development by MS dwarf the petty Berkeley funding, though Northern California birthed much of computing, AND STILL has the brainpower to go forward, if we allow it to proceed. Look at the gamesters development in North Carolina and Canada. These folks are doing amazing work, and the old guard better beware, lest we lose what is left of our “edge” to petty squabbling.

    Leadership comes through vision such as Jobs or Gates (and in some cases through ruthlessness), not through individuals who squabble for their piece of scrap. I think of the words of Peter Marshall: “We pray for this land. We need Thy help in this time of testing and uncertainty, when men who could fight together on the field of battle seem strangely unable to work together around conference tables for peace.” Unfortunately, a unified effort may not (has not) come in time to avert economic and national disaster. We worry about saving some smelt when China chokes, murders, pollutes, represses, and builds grand, and brand new cities beneath its foul air.

    I am concerned that Mr. Gates and Mr. Buffett may have lost sight of the needs of our nation under a misguided, over-emotional, noblesse oblige. The extraordinary sacrifices of common Americans to provide an environment for their business is lost in the short-sell, short-shrift, immediacy of lack of business ethics. Nixonian/Clinton engagement is met with the knowing smile of those who are suckering the American power. And we play, when we should pray, and repent.

    Of course, the cognoscenti might demur. But we may need to crack through the stubborn egg(heads) to realize a new model. I like scrambled eggs–they are a good way to start a new day. And in the new day, with the new pay (and new paymaster), some of the cognoscenti might jump on board, when they realize that their ship is sinking. (Or they will go down, squabbling still about the arrangement of deck chairs.)

    Comment by Steven Hines — June 12, 2011 @ 1:24 pm
