Multicore NZ

January 23, 2012

Article: “The Memory Wall is ending multicore scaling”

Filed under: High Performance Computing, Integration and Services, Multicore — multicoreblog @ 8:58 am

From this article  at Electronic Design: “Multicore processors dominate today’s computing landscape. Multicore chips are found in platforms as diverse as Apple’s iPad and the Fujitsu K supercomputer. In 2005, as power consumption limited single-core CPU clock rates to about 3 GHz, Intel introduced the two-core Core 2 Duo. Since then, multicore CPUs and graphics processing units (GPUs) have dominated computer architectures. Integrating more cores per socket has become the way that processors can continue to exploit Moore’s law.”

“But a funny thing happened on the way to the multicore forum: processor utilization began to decrease. At first glance, Intel Sandy Bridge servers, with eight 3-GHz cores, and the Nvidia Fermi GPU, featuring 512 floating-point engines, seem to offer linearly improved multicore goodness.”

“But a worrying trend has emerged in supercomputing, which deploys thousands of multicore CPU and GPU sockets for big data applications, foreshadowing severe problems with multicore. As a percentage of peak mega-floating-point operations per second (Mflops), today’s supercomputers are less than 10% utilized. The reason is simple: input-output (I/O) has not kept pace with multicore millions of instructions per second (MIPS).”

Interesting.
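To see why utilization falls, a quick back-of-the-envelope calculation helps: peak flops is roughly cores × clock × flops-per-cycle, so every extra core inflates the denominator, while an I/O-bound code's achieved rate barely moves. Here is a minimal sketch in C, using my own illustrative numbers rather than anything from the article:

/* Back-of-the-envelope sketch with my own illustrative numbers (not from the
   article): peak flops grows with core count, but if I/O caps the achieved
   rate, utilization as a fraction of peak falls as cores are added. */
#include <stdio.h>

int main(void) {
    double clock_hz        = 3.0e9;   /* 3 GHz, as in the quoted Sandy Bridge example */
    double flops_per_cycle = 8.0;     /* assumed SIMD width per core */
    double achieved_flops  = 1.5e10;  /* hypothetical I/O-bound sustained rate */

    for (int cores = 1; cores <= 16; cores *= 2) {
        double peak = cores * clock_hz * flops_per_cycle;
        printf("%2d cores: peak %7.1f Gflops, utilization %5.1f%%\n",
               cores, peak / 1e9, 100.0 * achieved_flops / peak);
    }
    return 0;
}

With these assumed numbers, utilization drops from over 60% on one core to under 4% on sixteen, which is the shape of the trend the article describes.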


August 24, 2009

News from the Parallel Garage

Filed under: Integration and Services, Multicore, Parallel Programming — multicoreblog @ 10:35 pm

Spent a couple of days with Bill Reichert on his first visit to NZ.

Bill is the MD of Garage.com, a VC firm from Palo Alto. When I first met him in early 2008, I was probably the first person to pitch him a multicore software start-up (and from NZ!): the one I had founded in late 2005. We did a lot of business with Sun Microsystems in Santa Clara and some in Japan, but by then it was already “coming to an end” (it was 2008, remember?).

Bill came to NZ at the invitation of Investment NZ and Jenny Morel to give a presentation at Morgo, so it was great to catch up in person again after a couple of years of email exchanges.

This time the conversation focused more on the future of IT and how Multicore Programming and Parallel Computing are growing in Silicon Valley. I want to note here a couple of things that Bill mentioned today at Arabica Cafe in Wellington: Hypertable, and Kerosene and a Match.

“Hypertable is an open source project based on published best practices and our own experience in solving large-scale data-intensive tasks. Our goal is to bring the benefits of new levels of both performance and scale to many data-driven businesses who are currently limited by previous-generation platforms. Our goal is nothing less than that Hypertable become one of the world’s most massively parallel high performance database platforms.”

“Founded in 2009, Kerosene and a Match is a software developer building tools that leverage the massively parallel, low cost computing power of commodity graphics processors to build ultra-high performance cloud computing platforms.”

“Led by an experienced team of software entrepreneurs and visionaries, KaaM’s goal is to power the future of on-demand computing and applications by harnessing the untapped computing power of inexpensive, off-the-shelf GPU hardware to deliver cloud computing architectures that are 50 or more times more powerful and efficient than current CPU-centric systems.”

These are things that happen in Silicon Valley, but they could perfectly well happen in NZ too. We also discussed how Intel is trying to catch up with Nvidia and how NZ’s game industry can benefit from this movement, but that’s a topic for another post.

Nicolás Erdödy, Wellington

November 2, 2008

Facebook’s electricity bill

Filed under: Integration and Services — multicoreblog @ 11:48 pm

You always need to read news like this with care, rather than simply repeating what someone heard from somewhere…but even if this one is off by an order of magnitude, it is worth thinking about:

Facebook is spending “well over” a million dollars a month in electricity alone and “likely” another $500,000 for bandwidth, as shameless social networkers post billions of photos and other solipsistic pixels. Recently, the company said that users upload two to three terabytes of photos each day. And every second, it serves as many as 300,000 pics the other way.

The word from TechCrunch is that Facebook has set aside $100m to buy 50,000 servers this year and next.
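Taking the quoted figures at face value (and, as said above, they may well be off), the per-server arithmetic is easy to check. A quick sketch in C, using the article’s numbers and my own rough divisions:

/* Back-of-the-envelope check of the quoted figures; the inputs are the
   article's numbers, the per-server splits are my own rough division. */
#include <stdio.h>

int main(void) {
    double electricity_per_month = 1.0e6;   /* "well over" $1M/month */
    double bandwidth_per_month   = 5.0e5;   /* "likely" $500k/month */
    double server_budget         = 1.0e8;   /* $100M, per TechCrunch */
    double servers               = 5.0e4;   /* 50,000 servers */
    double photos_per_second     = 3.0e5;   /* up to 300,000 pics served/s */

    printf("capex per server:        $%.0f\n", server_budget / servers);
    printf("electricity per server:  $%.1f/month\n", electricity_per_month / servers);
    printf("bandwidth per server:    $%.1f/month\n", bandwidth_per_month / servers);
    printf("photos served per day:   %.1f billion (peak-rate extrapolation)\n",
           photos_per_second * 86400.0 / 1e9);
    return 0;
}

Even at face value, that works out to about $2,000 of hardware per server, plus roughly $20 a month in electricity and $10 in bandwidth per server; the photos-per-day figure extrapolates a peak rate, so treat it as an upper bound.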

More about this topic in this blog, which also has a lot of good news about Cloud Computing.

Debugging Parallel Programs

Filed under: Debugging, Uncategorized — multicoreblog @ 11:25 pm

From an article, we learned that TotalView Technologies, a provider of interactive analysis and debugging tools for serial and parallel codes, announced that its TotalView® Debugger is playing a critical role in the advancement of parallel computing in the academic world, as a number of higher educational institutions have adopted the debugger to simplify the development of their parallel processing applications.

“Stanford University developers are creating programs that require large-scale, massively parallel computing resources to enable computationally intensive research, and it is critical for us to provide them with the most advanced tools to enhance their efforts,” said Steve Jones, director of Stanford University’s High-Performance Computing Center. “We are constantly striving to keep our High-Performance Computing Center at the forefront of this technology revolution, and partnering with TotalView Technologies, an established leader in the field of interactive analysis and debugging of serial and parallel codes for the most sophisticated software applications, helps us to achieve that.”

As parallel programming continues to become more widely adopted, academic institutions are aggressively expanding their education and research efforts in this area. TotalView Technologies has a long history of working with the academic community, making it easier for software developers of all experience levels to build and maintain complex applications on multi-processor platforms. The TotalView Debugger, a comprehensive source code analysis and debugging tool, dramatically enhances and simplifies the process of debugging parallel, data-intensive, multi-process, multi-threaded or network-distributed applications.

“Many of today’s academic institutions are affected by a shortage of software developers with experience in complex programming methods such as parallelism and concurrency,” said Chris Gottbrath, product manager at TotalView Technologies. “By enabling academic developers to more easily develop new technology applications to solve complicated research problems, TotalView Technologies is helping to alleviate this problem and advancing the research efforts of higher educational institutions worldwide.”

TotalView’s website is full of interesting material, like a white paper on Memory Debugging in Parallel and Distributed Applications, released in September 2008.
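For readers new to the area, the kind of defect these tools exist to hunt down is easy to write and hard to spot. A minimal sketch of a classic data race, in plain C with pthreads and nothing TotalView-specific:

/* Minimal sketch of a classic parallel bug (a data race), the kind of defect
   a parallel debugger is used to hunt down; plain pthreads, not tied to any
   particular tool. Compile with: gcc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

#define THREADS 4
#define ITERS   1000000

static long counter = 0;                 /* shared, unprotected */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        counter++;                       /* racy read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    /* Expected 4,000,000; typically prints less because increments are lost. */
    printf("counter = %ld (expected %d)\n", counter, THREADS * ITERS);
    return 0;
}

Run it a few times and the counter will usually come up short, by a different amount each run, which is exactly why interactive parallel debuggers and memory debuggers earn their keep.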

October 24, 2008

Support 101

Filed under: Integration and Services — multicoreblog @ 2:54 pm

This article from Linux Watch mentions some simple but big truths about what you should expect from your software provider (this case is about Red Hat, but it should apply to everyone).

…Red Hat has the usual “tiered support — tier 1, 2, and 3 — we try to push as much technical expertise as possible into levels 1 and 2. That’s so that we can try to solve our customer problems at the first interaction.”  “97 percent of all problems are resolved by the first line of support.” Typically, Red Hat deals with approximately, “7,000 issues per month.” 

 “24×7, though, is taken as a given. What customers really want to know is can you support the whole environment where Linux is the part of the package.” 

“I think there are two areas that our customers really appreciate in our support. The first is that we are experts on our own technology.” (Shouldn’t that be obvious? Would you call a mechanic who is an expert in Hondas to repair your Renault? Of course he will know something about it, but you will look for someone with concentrated expertise, unless you can’t find one, or you can’t afford it, or there is no one close to you. This is when a new market of system integrators and multilevel experts comes to the party…)


“Most of our customers know how to find Linux support online, so when they come to us they’re looking for a higher level of expertise and we deliver it.”

“The second is how well we interact with our other partners in the overall IT ecosystem. As Linux adoption is driven deeper into the datacenter, we never forget that we’re not working in a vacuum. We need to work with Oracle, Sybase, EMC, and so on. When customers come to us, they’re looking for a single throat to choke for support, and we try to deliver the goods.”

OK, but isn’t this obvious? Isn’t it the same expectation you have when you bring your car to the workshop to have the engine checked? If it also has a flat tire, a battery to replace and a radio to repair, then if you can, you would love to have it all done by one guy in one place. You know that he will have the connections with the specialists…and will charge you for his time sourcing and integrating the parts for you. You just want your car back and working asap.
