Multicore NZ

February 4, 2014

Chipping in for multicore champion – let’s get parallel programming

Filed under: High Performance Computing, Models, Multicore, Parallel Programming — multicoreblog @ 6:44 am


via Chipping in for multicore champion – let’s get parallel programming.

New Zealand, February 4, 2014



March 10, 2012

Your Chance to Meet the Serial Killers and Explore the Parallel Universe

Filed under: High Performance Computing, Multicore, Parallel Programming — multicoreblog @ 1:31 am

Because they’re coming to Wellington later this month, so you can meet them and explore it with them.

And maybe you should, especially if you’re someone involved in IT, like a CIO, a CTO or a software engineer.

Something’s happened in the chip world. A change so fundamental it’s created opportunities to do everything faster, better and cheaper – across the board.

Serial computing is dead. It’s just that most people don’t know it yet. But it is. Intel knows that. So does Google. And ARM, the UK company whose processors drive 90% of the world’s smartphones. Weta Digital’s in the new loop, along with the scientists pitching to have the massive Square Kilometre Array (SKA) located in Australasia.

For all of them, serial computing is an old technology, killed by parallel processing. Parallel processing (PP) relies on new-generation chips with not a single core but many, even thousands of them. For most people, though, the technology’s less important than the possibilities. Which are immense, according to PP’s champions. Many of whom are coming to Wellington this month for Multicore World 2012, New Zealand’s first heads-up on this IT revolution.

Speakers at Multicore World (March 27-28) include Intel Software Director James Reinders and Dr Tim Mattson from Intel Labs; John Goodacre, Director of the ARM Processor Division; Weta Digital’s CTO, Sebastian Sylwan; Dr Mark Moir from Oracle Labs; Microsoft’s Artur Laksberg; as well as the CSIRO’s Dr Tim Cornwell and speakers from the Universities of Melbourne and Otago.

RIP single-core CPU? Yes, and we should be grateful for that. Whether you’re a convert or a sceptic, this is a great opportunity to meet the serial killers and explore the parallel universe. Multicore World 2012 has been put together by the New Zealand company Open Parallel; information and registration details are on the website.

January 23, 2012

Article: “The Memory Wall is ending multicore scaling”

Filed under: High Performance Computing, Integration and Services, Multicore — multicoreblog @ 8:58 am

From this article at Electronic Design: “Multicore processors dominate today’s computing landscape. Multicore chips are found in platforms as diverse as Apple’s iPad and the Fujitsu K supercomputer. In 2005, as power consumption limited single-core CPU clock rates to about 3 GHz, Intel introduced the two-core Core 2 Duo. Since then, multicore CPUs and graphics processing units (GPUs) have dominated computer architectures. Integrating more cores per socket has become the way that processors can continue to exploit Moore’s law.”

“But a funny thing happened on the way to the multicore forum: processor utilization began to decrease. At first glance, Intel Sandy Bridge servers, with eight 3-GHz cores, and the Nvidia Fermi GPU, featuring 512 floating-point engines, seem to offer linearly improved multicore goodness.”

“But a worrying trend has emerged in supercomputing, which deploys thousands of multicore CPU and GPU sockets for big data applications, foreshadowing severe problems with multicore. As a percentage of peak mega-floating-point operations per second (Mflops), today’s supercomputers are less than 10% utilized. The reason is simple: input-output (I/O) has not kept pace with multicore millions of instructions per second (MIPS).”
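The utilization figure in the quote is simply sustained throughput divided by peak throughput. A minimal Python sketch of that arithmetic (the machine numbers below are hypothetical, not figures from the article):

```python
def utilization(sustained_flops: float, peak_flops: float) -> float:
    """Fraction of peak floating-point performance actually achieved."""
    return sustained_flops / peak_flops

# A hypothetical machine with a 10 PFLOPS peak that sustains only
# 0.9 PFLOPS on an I/O-bound workload is running at 9% of peak:
print(utilization(0.9e15, 10e15))  # 0.09
```

When I/O bandwidth lags behind aggregate core throughput, adding more cores raises the peak without raising the sustained number, so this ratio keeps shrinking.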




March 13, 2011

When will we see applications for multicore systems?

Filed under: High Performance Computing, Multicore, Parallel Programming — multicoreblog @ 10:23 pm

Keshav Pingali, a computer scientist at the University of Texas at Austin, is working with IBM under the auspices of Open Collaborative Research to develop a programming language that will give programmers the tools to write multicore-compatible code.

Listen to Keshav’s podcast and read the transcript here

February 9, 2011

SuperComputer to be used for Research in Agriculture

The Centre for Development of Advanced Computing (C-DAC), which has ten centres in major Indian cities, “is now assisting the Indian Council of Agricultural Research (ICAR) in establishing a national agricultural bioinformatics grid”.

This initiative, the first of its kind in India, will help scientists enhance agricultural productivity and also address problems like food security. As part of the project, a three-day training-cum-workshop programme on ‘Parallel and High Performance Computing’ began on Monday, February 7.

The workshop will provide an insight into the different aspects of high performance computing (HPC) with the goal of building capability in solving complex problems in agriculture and biotechnology. Speaking to DNA, Goldi Misra, group coordinator and head, HPC Solutions Group, C-DAC, said the use of HPC would help scientists address the problem of food scarcity at the grass-roots level. Full article.

July 15, 2010

Parallelism is not new

Filed under: High Performance Computing, Models, Multicore, Parallel Programming — multicoreblog @ 10:06 am

Peter J. Denning and Jack B. Dennis wrote in their paper “The Resurgence of Parallelism” that

“Parallelism is not new; the realization that it is essential for continued progress in high-performance computing is. Parallelism is not yet a paradigm, but may become so if enough people adopt it as the standard practice and standard way of thinking about computation.”

“The new era of research in parallel processing can benefit from the results of the extensive research in the 1960s and 1970s, avoiding rediscovery of ideas already documented in the literature: shared memory multiprocessing, determinacy, functional programming, and virtual memory.”

Worth reading not only for its excellent presentation and readability but also for its abundant references.

June 24, 2010


Filed under: High Performance Computing, Parallel Programming — multicoreblog @ 1:46 am

BOOM is “an effort to explore implementing a Cloud software stack in a data-centric language. BOOM stands for the Berkeley Orders Of Magnitude project, because we seek to enable people to build systems that are OOM bigger than are being built today, with OOM less effort than traditional programming methodologies”.

There is also more here about the paper “The Declarative Imperative: Experiences and Conjectures in Distributed Logic”.

November 6, 2009

Exaflop Computing

Filed under: High Performance Computing, Parallel Programming — multicoreblog @ 9:13 pm

SC’09 will take place next week in Portland, Oregon, USA.

“SC09 has adopted the theme of “Computing for a Changing World,” and will present world renowned speakers on initiatives related to Sustainability, Bio-Computing and the 3D Internet.”

“Over the next 5 years we expect the extended SC community to play an important role in leading the mainstream of computing into an era of parallelism. ”


Some of the abstracts of the keynotes at SC’09 are particularly interesting:

The Rise of the 3D Internet – Intel CTO, Justin Rattner

“Forty Exabytes of unique data will be generated worldwide in 2009. This data can help us understand scientific and engineering phenomenon as well as operational trends in business and finance. The best way to understand, navigate and communicate these phenomena is through visualization. In his opening address, Intel CTO Justin Rattner will talk about today’s data deluge and how high performance computing is being used to deliver cutting edge, 3D collaborative visualizations. He will also discuss how the 2D Internet started and draw parallels to the rise of the 3D Internet today. With the help of demonstrations, he will show how rich visualization of scientific data is being used for discovery, collaboration and education.”

A couple of other presentations caught my attention (apart from Al Gore and his view on climate change 🙂 ):

HPC and the Challenge of Achieving a Twenty-Fold Increase in Wind Energy

The Outlook for Energy: Enabled with Supercomputing

“The presentation reviews ExxonMobil’s global energy outlook through 2030. The projections indicate that, at that time, the world’s population will be ~8 billion, roughly 25% higher than today. Along with this population rise will be continuing economic growth. This combination of population and economic growth will increase energy demand by over 50% versus 2000. As demand rises, the pace of technology improvement is likely to accelerate, reflecting the development and deployment of new technologies for obtaining energy–to include finding and producing oil and natural gas. Effective technology solutions to the energy challenges before us will naturally rely on modeling complicated processes and that in turn will lead to a strong need for super computing. Two examples of the supercomputing need in the oil business, seismic approaches for finding petroleum and petroleum reservoir fluid-flow modeling (also known as “reservoir simulation”) will be discussed in the presentation.”


An interesting way to put all these ideas in a less “marketing driven” context is to read the interview with Rick Stevens from Argonne, about “reaching the next milestone in computing history: the exaflops computer.”

I tried to summarise the article, but actually it’s simply better that you go through it and have a glimpse of the future of Supercomputing, which soon (10 years?) won’t be supercomputing but just computing.

So, what’s an exaflop?

FLOPS = In computing, FLOPS (or flops or flop/s) is an acronym meaning FLoating point Operations Per Second. The FLOPS is a measure of a computer’s performance, especially in fields of scientific calculations that make heavy use of floating point calculations, similar to the older, simpler, instructions per second. (Wikipedia)

1 exaFLOPS = 10^18 = 1,000,000,000,000,000,000 floating-point operations per second.
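To make that scale concrete, here is a minimal Python sketch (the workload size is hypothetical) comparing how long petascale and exascale machines would take on the same job:

```python
# SI prefixes for FLOPS scales.
GIGA, TERA, PETA, EXA = 10**9, 10**12, 10**15, 10**18

def seconds_to_finish(total_flop: int, machine_flops: int) -> float:
    """Time in seconds to execute total_flop operations at a sustained rate."""
    return total_flop / machine_flops

workload = 10**21  # a hypothetical job of 10^21 floating-point operations

print(seconds_to_finish(workload, PETA))  # 1000000.0 -> about 11.6 days
print(seconds_to_finish(workload, EXA))   # 1000.0    -> under 17 minutes
```

The thousand-fold jump from petaflops to exaflops turns a multi-day run into a coffee break, which is why the milestone matters.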


That’s why Rattner is so excited about 3D internet and other applications.  In this very good interview, he starts by saying that 3D internet is where HPC “goes consumer.”

The article from HPC wire has a good history of HPC and its different players, and finishes by quoting Rattner: “If the 3D, immersive experience becomes the dominant metaphor for how people experience the internet of tomorrow, we won’t have to worry about who will build the processors and computers that do HPC. Everyone will want to be a part of that.”


Are you planning to be “part of that”?


Nicolás Erdödy

North Otago, New Zealand


September 15, 2009

HPC is ready for business

Filed under: High Performance Computing — multicoreblog @ 11:04 pm

“HPC is ready for business” is the title of a virtual conference organised by Sun Microsystems.

It will be on September 17, 2009 and has free access.

From the website

“Join us for the only virtual conference dedicated to the best in high performance computing. The online event will give you an opportunity to hear from compute and HPC guru Andy Bechtolsheim and industry experts discussing the trends and issues facing the computational ecosystem.”

“There will also be industry and technology exhibits offering virtual opportunities to discuss technologies, accomplishments, and collaborations in HPC, networking, storage, software, and data management. Come learn how High Performance Computing has truly become business ready.”

Even if you don’t attend, it’s worth having a look at the site. There are interesting resources, including a book: “HPC for Dummies” 🙂 (don’t be put off by the title: it is written by Douglas Eadline, Senior HPC Editor for Linux Magazine, so it’s serious reading!)

April 30, 2009

Parallel Algorithm and Parallel Software

Filed under: High Performance Computing, Parallel Programming — multicoreblog @ 6:36 am

The Second International Workshop on Parallel Algorithm and Parallel Software (IWPAPS’09), held in conjunction with the 11th IEEE International Conference on High Performance Computing and Communications (HPCC-09), will take place on June 25-27, 2009 at Korea University, Seoul, Korea.

The list of topics for papers:

  • Scalable Parallel Algorithm Design
  • Next Generation Parallel Programming Model
  • Next Generation Parallel Programming Languages
  • Parallelizing Compilers for Many-core Processor
  • Parallel Computing Model
  • Parallel Computing Application
  • Performance Evaluation of Parallel Software
  • Parallel Debugging
  • Performance Optimization of Parallel Software
  • Self Adaptive Performance Tuning
  • Multithreaded Parallel Programming
  • Many-core Parallel Programming
  • Software Engineering issues
  • Task Scheduling and Load Balancing
  • Fault Tolerance of Parallel Software
  • GPU-based or FPGA-based Parallel Computing
