Multicore NZ

February 4, 2014

Chipping in for multicore champion – let’s get parallel programming

Filed under: High Performance Computing, Models, Multicore, Parallel Programming — multicoreblog @ 6:44 am


via Chipping in for multicore champion – let’s get parallel programming.

New Zealand, February 4, 2014


July 15, 2010

Parallelism is not new

Filed under: High Performance Computing, Models, Multicore, Parallel Programming — multicoreblog @ 10:06 am

Peter J. Denning and Jack B. Dennis wrote in their paper “The Resurgence of Parallelism” that

“Parallelism is not new; the realization that it is essential for continued progress in high-performance computing is. Parallelism is not yet a paradigm, but may become so if enough people adopt it as the standard practice and standard way of thinking about computation.”

“The new era of research in parallel processing can benefit from the results of the extensive research in the 1960s and 1970s, avoiding rediscovery of ideas already documented in the literature: shared memory multiprocessing, determinacy, functional programming, and virtual memory.”

Worth reading not only for its excellent presentation and easy style, but also for its abundant references.

September 10, 2009

The business model for Parallel Programming

Filed under: Models, Multicore, Parallel Programming — multicoreblog @ 12:32 am

A recent article at InfoWorld (August 26) has a classic title:

“Parallelism needs killer application for mass adoption”

The writer reports from Hot Chips 21, a symposium on high-performance chips held in August at Stanford University and sponsored by the IEEE.

However, the article presents a number of general issues in an easy-to-read way, so it is worth reproducing most of it below (I added some links to explain names, and highlighted the concepts that inspired the title of this post):

“The addition of multiple cores to microprocessors has created a significant opportunity for parallel programming, but a killer application is needed to push the concept into the mainstream, researchers said during a panel discussion at the Hot Chips conference.”

“Most software today is still being written for sequential execution and programming models need to change to take advantage of faster hardware and an increasing number of cores on chips, panelists said. Programmers need to write code in a way that enables tasks to be divided up and executed simultaneously across multiple cores and threads.”

“A lot of focus and money have gone into building fast machines and better programming languages, said David Patterson, a computer science professor at the University of California, Berkeley, at the conference in Stanford on Monday.” “Comparatively little attention has been paid to writing desktop programs in parallel, but applications such as gaming and music could change that. Users of such programs demand the best real-time performance, so programmers may have to adopt models that break up tasks over multiple threads and cores.”

“For example, novel forms of parallelism could improve the quality of music played back on PCs and smartphones, Patterson said. Code that does a better job of separating channels and instruments could ultimately generate sound through parallel interaction.”

“The University of California, Berkeley, has a parallel computing lab where researchers are trying to understand how applications are used, which could help optimize code for handheld devices. One project aims to bring desktop-quality browsing to handheld devices by optimizing code based on specific tasks like rendering and parsing of pages. Another project involves optimizing code for faster retrieval of health information. The lab is funded primarily by Intel and Microsoft.”

“Berkeley researchers are trying to bring in parallelism by replacing bits of code originally written using scripting languages like Python and Ruby on Rails with new low-level C code. The new code specifically focuses on particular tasks like analyzing a specific voice pattern in a speech recognition application, Patterson said in an interview Wednesday. The code is written using OpenMP or MPI, application programming interfaces designed to write machine-level parallel applications.”

“Experts are needed to write this highly specialized parallel code, Patterson said. It reduces development time for programmers who would otherwise use Python and Ruby on Rails, which make application development easier, but do not focus on parallelism, Patterson said in the interview. The lab has shown specific task execution jump by a factor of 20 with the low-level machine code.”
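As a rough illustration of the kind of low-level parallel code described above, here is a minimal OpenMP sketch in C. It is my own illustrative example (the array, loop and sizes are invented, not taken from the ParLab work): a simple numeric kernel is split across whatever cores the runtime finds, with a reduction to combine the per-thread partial sums safely.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double sample[N];
        double energy = 0.0;

        /* Each thread processes its own chunk of the loop; the reduction
           clause combines the per-thread sums without a data race. */
        #pragma omp parallel for reduction(+:energy)
        for (int i = 0; i < N; i++) {
            sample[i] = (double)i * 0.001;   /* stand-in for real signal data */
            energy += sample[i] * sample[i];
        }

        printf("max threads: %d, energy: %f\n", omp_get_max_threads(), energy);
        return 0;
    }

Compiled with something like gcc -fopenmp, the same source still builds and runs sequentially when OpenMP is switched off, which is part of why this style is attractive for retrofitting existing C code.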

“The concept of parallelism is not new and has been mostly the domain of high-performance computing. Low levels of parallelism have always been possible, but programmers have faced a daunting task with a lack of software tools and ever-changing hardware environments.”

“Threads have to synchronize correctly,” said Christos Kozyrakis, a professor of electrical engineering and computer science at Stanford University, during a presentation prior to the panel discussion. Code needs to be written in a form that behaves predictably and scales as more cores become available.”

“Compilers also need to be made smarter and be perceptive enough to break up threads on time so that outputs are received in a correct sequence, Kozyrakis said. Faulty attempts to build parallelism into code could create buggy software if specific calculations are not executed in a certain order. That is a problem commonly referred to as race conditions. Coders may also need to learn how to use multiple programming tools to achieve finer levels of parallelism, panelists said.”
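To make the race-condition point concrete, here is a tiny example of my own (not from the article) of the bug Kozyrakis is warning about: unsynchronised updates to shared data. The plain increment can lose updates when threads interleave; the atomic directive restores a predictable result.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        long unsafe = 0, safe = 0;

        #pragma omp parallel for
        for (long i = 0; i < 1000000; i++) {
            unsafe++;            /* unsynchronised read-modify-write: a race */

            #pragma omp atomic
            safe++;              /* atomic update: always totals 1000000 */
        }

        printf("unsafe = %ld (often short of 1000000), safe = %ld\n", unsafe, safe);
        return 0;
    }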

“There’s no lazy-boy approach to programming,” Patterson said at the conference.

“Memory and network latency have created bottlenecks in data throughput, which could negate the performance achieved by parallel task execution. There are also different programming tools for different architectures, which make it difficult to take advantage of all the hardware available.”

“Many parallelism tools available today are designed to harness the parallel processing capabilities of CPUs and graphics processing units to improve system performance. Apple, Intel, Nvidia, and Advanced Micro Devices are among the companies promoting OpenCL, a parallel programming environment that will be supported in Apple’s upcoming Mac OS X 10.6 operating system, also called Snow Leopard, which is due for release Friday. OpenCL competes with Microsoft, which is promoting its proprietary DirectX parallel programming tools, and Nvidia, which offers the CUDA framework.”

“OpenCL includes a C-like programming language with APIs to manage distribution of kernels across hardware such as processor cores and other resources. OpenCL could help Mac OS decode video faster by distributing pixel processing across multiple CPU and graphics processing units in a system.”
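For a flavour of what that C-like kernel language looks like, here is a minimal, hypothetical OpenCL kernel of the pixel-processing kind the article alludes to (the kernel and argument names are my own, and the host-side API calls that build the kernel and distribute it across CPU and GPU devices are omitted):

    /* OpenCL C: each work-item brightens one pixel; the runtime decides how
       work-items are spread across the available CPU and GPU cores. */
    __kernel void brighten(__global uchar4 *pixels, const float gain)
    {
        size_t i = get_global_id(0);           /* this work-item's index */
        float4 p = convert_float4(pixels[i]) * gain;
        pixels[i] = convert_uchar4_sat(p);     /* clamp back to 0..255 */
    }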

“All the existing tools are geared toward different software environments and take advantage of different resources, Patterson said. OpenCL, for example, is geared more toward execution of tasks on GPUs. Proprietary models like DirectX are hard to deploy across heterogeneous computing environments, while some models like OpenCL adapt to only specific environments that rely on GPUs.”

“I don’t think [OpenCL] is going to be embraced across all architectures,” Patterson said. “We need in the meantime to be trying other things,” like trying to improve on the programming models with commonly used development tools, such as Ruby on Rails, he said.

(…)
“Kozyrakis said Stanford has established a lab that aims to “make parallel application development practical for [the] masses,” by 2012. The researchers are working with companies like Intel, AMD, IBM, Sun, Hewlett-Packard, and Nvidia.”

“An immediate test for developers could be to try to convert existing legacy code to parallel for execution on modern chips, Berkeley’s Patterson said. A couple of companies are offering automatic parallelization, but rewriting and compiling the legacy code originally written for sequential execution could be a big challenge.”

“There’s money to be made in those areas,” Patterson said.


It sounds like telling someone in 1993 that there was money to be made by learning/trying/using Linux… Even if you start today, there will be huge demand for your skills as an individual or a start-up, and by working in this space you’ll be better positioned to seize opportunities and even build specific platforms.

The real killer app will not come from doing faster or better the things that are already possible with the new hardware today. It will come from using the new hardware (manycore and more) for applications that we are not even imagining could be possible today…

And the Open Source model will be prevalent; OpenCL and DirectX are still “competing” in the “old scenario”.

It is interesting to watch the evolution, but wouldn’t it be more fun to be part of it?

Nicolás Erdödy

Oamaru, NZ

August 20, 2009

Intel keeps shopping: now it’s Rapid Mind

Filed under: Models, Multicore, Parallel Programming — multicoreblog @ 12:29 am

A friend here in Wellington (who at the time was employee #1 of RapidMind :-)) sent me the news: Intel has just bought the Canadian company.

On their official blog they say “we are now part of Intel Corporation”… dated 19 August (which is “today” in US time: in NZ we are “ahead” :-)).

James Reinders, from Intel, also blogs about it: “The Rapidmind founders, engineering team and marketing team have joined Intel this week. Intel has acquired the Rapidmind products and technology.”


Is this a trend from Intel trying to consolidate its position as a leader in parallelisation?

Or are they just building a faster knowledge base for their hardware?

Both are very simplistic questions for a quick post: industry consolidation is common when new trends emerge.

But just look at a couple of posts below in this blog and you will see how Intel has been “shopping”.

First Wind River, then Cilk, now Rapid Mind…

What started as just an update on what was happening (the first post, re Wind River) became an interesting “coincidence” with Cilk; now it is more of a fact, a strategy: they are “out there” in buying mode.

Cisco followed a similar strategy some time ago when it bought several high-tech start-ups as a way to accelerate its access to innovation (and judging by its position in the market, it worked well…).

It will be interesting to discuss these and other questions with James at the miniconference on Open Source, Multicore and Parallel Computing in Wellington, New Zealand, in January 2010.

What’s fascinating is how the trend is slowly consolidating itself: while I’m writing this, the ParLab summer course in Berkeley is happening simultaneously online.

However, this trend is not a wave; it looks more like a tide… and like many tides, not everyone is noticing it.


July 28, 2009

Teaching Parallel Programming in High Schools

Filed under: Models, Parallel Programming — multicoreblog @ 10:44 pm

“Are high school whiz kids ready to ‘think parallel’?” is the title of this article at Intel.

It is a pilot programme, with 16 students and 6 teachers participating.

The idea is simple: teach students to think parallel before they become “contaminated” by the current paradigm of sequential programming.

In a previous life I was a maths lecturer, teacher and education researcher: there is nothing a keen student won’t be able to learn, provided the knowledge is presented with enthusiasm. I remember experimenting with teaching calculus and advanced algebra to 13–14 year olds…

Back to the point, this is an Intel initiative, so it should be seen in the broader context of promoting its multicore chips.

What needs to be highlighted is the huge change that can be achieved in every computational process thanks to the parallelism of the new chips. If, in the massive shift to this technology, Intel’s shareholders get more dividends, well, they took the initiative in developing the technology (as did Sun and AMD and Plurality and Tilera and…).

It will be a long journey to extend this learning from 16 students to millions of programmers worldwide, but everything needs to start somewhere, and this is the “right” start. You won’t easily convince current programmers (and their CIOs) to shift billions of lines of legacy code to parallel until the situation is burning. (It could be similar to the digitisation of paper documents when the IT revolution began: I’m still impressed by that, and every time I think of the health system and how many prescriptions are still handwritten worldwide, I’m reminded of how far we have come in a short time, and how much there is still to be done.)

And me? I’m back to school!

Just received this from Berkeley!

“Your registration has been received for ONLINE attendance at the 2009 Par Lab Boot Camp – Short Course on Parallel Computing.  The course will run 9am-6pm August 19-21.”

Final thought: what are YOU doing to start to think parallel?

Nicolas Erdody, Wellington, New Zealand

linux.conf.au in Wellington, January 2010

Filed under: Models, Multicore, Parallel Programming — multicoreblog @ 10:30 am

From 18 to 23 January 2010, one of the most important Linux conferences will be held in Wellington, the capital of New Zealand.

http://www.lca2010.org.nz

July 8, 2009

Grand Central = Grand Challenge for the iPhone?

Filed under: Models, Multicore — multicoreblog @ 9:49 am

Apple is releasing Snow Leopard later this year. There is a good post at an Apple blog about what will change for the end user of a Mac. It is worth reading through to the last paragraph, which says:

The Future

“Things get a little more interesting when you consider that future iPhones will likely have multi-core CPU’s and that Intel is advising developers to prepare for a future with “thousands of cores” available. Add in something like Larrabee, which presents dozens of additional cores to the system, and the wisdom of a systemwide approach to managing threads becomes apparent.”
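For readers who haven’t seen it, Grand Central Dispatch is exposed to plain C code via libdispatch. The sketch below is my own minimal example (not Apple’s sample code): it hands a batch of independent work items to a system-managed concurrent queue and lets the OS decide how many cores to use.

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void) {
        /* A global concurrent queue managed by the operating system. */
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* dispatch_apply runs the block once per index, spreading the
           iterations across however many cores the system sees fit. */
        dispatch_apply(16, queue, ^(size_t i) {
            printf("processing chunk %zu\n", i);
        });

        return 0;
    }

On Mac OS X this builds with clang and the blocks extension; the “describe the work, let the system schedule it” pattern is what the paragraph above suggests future multicore iPhones would inherit.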

Are all the developers currently working on applications for the iPhone aware of this future?

For example, what about the “iFund”?

-yes, it exists.

It is a venture capital fund managed by KPCB (a major VC firm in Silicon Valley whose history is beyond the scope of this blog: well before they funded Google, they were backing Tandem and Genentech).

Their site says:

iFund™

“KPCB’s iFund™ is a $100M investment initiative that will fund market-changing ideas and products that extend the revolutionary new iPhone and iPod touch platform. The iFund™ is agnostic to size and stage of investment and will invest in companies building applications, services and components. Focus areas include location based services, social networking, mCommerce (including advertising and payments), communication, and entertainment. The iFund™ will back innovators pursuing transformative, high-impact ideas with an eye towards building independent durable companies atop the iPhone / iPod touch platform.”

“A revolutionary new platform is a rare and prized opportunity for entrepreneurs, and that’s exactly what Apple has created with iPhone and iPod touch,” said John Doerr, Partner at Kleiner Perkins Caufield & Byers. “We think several significant new companies will emerge as this new platform evolves, and the iFund™ will empower them to realize their full potential.”

“Developers are already bursting with ideas for the iPhone and iPod touch, and now they have the chance to turn those ideas into great companies with the help of world-class venture capitalists,” said Steve Jobs, Apple’s CEO. “We can’t wait to start working with Kleiner Perkins and the companies they fund through this new initiative.”

It will be interesting to see how these new ventures adjust to, adapt to and take advantage of (or not) the multicore iPhone platform…

Nicolas Erdody

June 9, 2009

Intel buys Wind River

Filed under: Models, Multicore — multicoreblog @ 5:46 am

Wind River, a company that has been around for a couple of decades, has been bought by Intel for $884 million. Not bad, given that Linux is their major product.

An article at Computerworld has other comments of interest to us:

“Beyond mobile and embedded processors, the chip maker could also extend Wind River’s technology to high-end multicore processors like Intel’s Larrabee graphics chip, McCarron said. Wind River and Intel already collaborate to deliver software tools for multicore systems.”

“A recent programming challenge has been writing software for simultaneous execution across multiple cores and Wind River’s products can translate program code to loop tasks across multiple processors.”

“Intel is working on chips that enable symmetric multiprocessing, I wonder if there is a connection there,” McCarron said. “Given that Larrabee is running its own internal OS, there may be a tie-in related to the OS.”

“The Larrabee graphics processor uses an array of X86 cores with vector processors, and a chip-level operating system is needed to coordinate software execution across those cores. Right now the chip uses BSD (Berkeley Software Distribution), a flavor of the Unix OS, internally to coordinate processing, and Wind River’s products could be used to optimize software — like games — to work effectively.”

“It greatly eases the load on people writing the code,” McCarron said.

“Only time will tell if Intel winds up paying too much to acquire Wind River, but the planned acquisition is an exercise by Intel to provide the right mix of products to push chips into new devices and markets, analysts said. Intel’s consumers want to get higher software performance from the chip.”

“The planned acquisition is not about Intel taking aim at its competitors, in McCarron’s view. “This is not targeting any company. It is more about controlling your destiny and having the right ingredients,” McCarron said.

(Dean McCarron is a principal analyst with Mercury Research.)

My opinion is that Intel keeps pushing its multicore chips, and this acquisition simply continues the trend of demonstrating their ease of use.

Intel buys Wind River, Oracle bought Sun: one could read this as consolidation of the software industry around major firms. Another way to see it is that the big players need to prepare themselves for innovations that are challenging their existence. Cisco grew aggressively by acquisition years ago, but that was a way to stay ahead of competitors by buying innovative companies. Sun, and maybe Wind River, would not have survived easily over the next few years, but their technology has long been orientated towards the future that the major players are now looking for.

As usual, more consolidation just opens new opportunities on the fringe. Not only in technology, but in how it is managed. The OSS model will certainly play a strong part in it.

February 12, 2009

Farming servers in NZ

Filed under: Models, Multicore — multicoreblog @ 8:12 am

Instead of “server farms”, it’s time to turn the phrase around in a country like NZ, where farms are central to the economy.

But that is not the substance of this post. In fact, this is not a post with a new idea; it is a reflection on how ideas can be misused and misunderstood, and on how headlines can be confusing…

Last Monday I had the opportunity to meet Andy Hopper in Christchurch. We had a friendly and very interesting conversation for a couple of hours, around many topics. This encounter came as a consequence of his presentation at a conference in Wellington (see post below).

Determined to make the most of our meeting, I did my homework on Andy: I found several interesting sources, but the best one is Andy himself: there are a number of podcasts here, made by Alan Macfarlane.

On the plane back from Christchurch, processing all the good ideas we had discussed, I came across the latest edition of Computerworld NZ, with the following content:

“British computer pioneer Andy Hopper, the final keynote speaker at the Australasian Computer Science Week conference, came under fire from Privacy Commissioner Marie Shroff for his frank predictions of a society where sensors monitor everyone’s energy consumption, and make graphically-presented data available on a public website. Hopper raises such strategies as one possible way of using technology to combat climate change and other threats to the environment. Shroff accused him of seeing such surveillance technology as “an ethics-free or moral-free zone”. Hopper was quick to deny this, saying his responsibility was to communicate future scenarios and open them up for discussion”.

I witnessed the Q&A session, together with a couple of hundred people: Ms. Shroff asked a single question after the hour-long presentation, among at least 8–10 other questions that were more related to realistic scenarios arising from the conference.

But this was enough for the press to run the headlines:


Computer pioneer takes privacy broadside

Privacy commissioner takes issue with sensor-monitoring scenario

(and it was in the front page…)

So all the good ideas presented were reduced to a headline closer to a gossip magazine…

I found a similar example, but this one is slightly positive:

“The world’s computing power should be moved from desktop computers and company servers to remote outposts where renewable energy such as wind and solar power is abundant, according to a Cambridge University computer expert” (Andy Hopper). This appeared in The Guardian with the prudent title

Wind power urged for computers

Another site quoted the same news item, but was more creative with the title:

Will wind farms wed server farms?

Which is undoubtedly catchy… but still to the point…

So why is the NZ press, even the technical press, so focused on micro details that get magnified for the sake of selling news that no one will read seriously? Anyway, this is not a very creative post either, so I should concentrate on the title of my next post. If I want more readers, I should probably title it something like “Britney Spears doesn’t use Multicore”…

January 29, 2009

NZ is not that far…

Filed under: Models, Multicore — multicoreblog @ 5:25 am

Indeed.

In less than a week, I met a number of people involved in Parallel Computing, the Cloud, the Grid and Open Source, all of them quoted in the Multicore NZ report. And all of them here, in the CBD of Wellington, NZ.

Yes, it was a conference, the Australasian Computer Science Week, but not all of them were here for that.

On Wednesday 21 I enjoyed a glass of wine with Ian Foster while we discussed how multicore will become mainstream (possibly through applications in media or games) and how to make the untapped talent existing in NZ (his words) better known, even within NZ. As an example, Ian mentioned Running Hot, an event for young scientists in NZ. It also happened here, in Wellington…

Another topic of our conversation was the blog of the Computing Community Consortium: the most visited tags are multicore and multicore parallel. We discussed all four posts, by Dan Reed (Microsoft), Marc Snir (UIUC), Andrew Chien (Intel) and Dave Patterson (Berkeley). Ian knows three of the four, all except Patterson (whom I met a long time ago), so we shared ideas about how a network like that could be orientated towards a concept like MulticoreNZ.

The blog is worth reading, by the way…

On Friday 23 I attended Andy Hopper’s presentation (Computing for the Future of the Planet) at the conference, and we had an interesting public exchange of ideas about how multicore could (or could not) solve some issues… I found the link to the presentation he gave at Google a few months ago; it is pretty similar to the Wellington one and shows a very compelling image of how NZ is one of the best places in the world to put a server farm…

Andy is the Head of the Computer Laboratory at the University of Cambridge.

I attended the presentation with Ken Hawick, from the Parallel Computing Centre of Massey University (Auckland) (also quoted in MulticoreNZ)

Finally, on Monday I spent many hours chatting with and listening to presentations by Simon Phipps, Chief Open Source Officer of Sun Microsystems (over lattes and locally brewed beers; it wouldn’t have been possible to clock up all that time under normal conditions…). We discussed many topics, including the Cloud and virtual worlds, not to mention his view that multithreading will give better solutions than parallel programming… His essays on software freedom are worth reading…

My point is about how NZ, so far away yet at the same time so close to specific talent with interests in similar areas, could take advantage of the willingness of these talented people to collaborate with NZ scientists.

Our distance becomes an easy way to distinguish ourselves; people are keen to be positive about what we do here. The links in this post alone give a number of clues to opportunities and avenues to pursue, with or without the collaboration of the people mentioned.

What would be the model for it? Who will drive it? When will it start? How will it be funded?

Sometimes the questions are more important than the answers…

Nicolas Erdody
