How Multicore Helps Overcome The ‘Tax’ Of Computing Delays
Press Release - 24 December 2012
Multicore World 2013
Exactly how multicore computers and parallel programming can reduce the time ‘tax’ imposed by a central processing unit is to be addressed by Paul McKenney at next February’s Multicore World 2013 conference.
These include computational workloads that simulate mechanical, electrical and fluid systems – for example, running car crash tests in computers rather than with real cars.
Computational finance, an area that people “love to hate”, says McKenney, is another field where multicore computing is increasingly used. His talk will also examine the growing use of software in all areas of design, manufacturing and commerce, where multicore computing will enable many more applications that are both real-time and computationally intensive.
McKenney, of IBM, will give his talk, ‘Bare-Metal Multicore Performance in General-Purpose Operating Systems’, at the Wellington Town Hall event on February 19 and 20.
McKenney says that for people running aggressive real-time applications, including laser welders, defect check/reject systems and process control, any delay or lag in response is effectively a ‘tax’ that slows overall performance.
“Multicore applications improve latency and response time in these situations,” he says.
“For people running iterative computational workloads with short iteration times, multicore operation decreases the ‘operating jitter’ that would otherwise sap performance.”
McKenney is one of a number of acclaimed international multicore experts speaking at the event, speakers of the calibre that fill conference venues overseas.
Multicore World 2013 founder Nicolas Erdody says the second annual event is an opportunity for New Zealand to establish a niche in the future of computing. He says the IT community, business and government should attend to ensure they are not left behind as the technology becomes increasingly mainstream.
Nicolas Erdody, Director Open Parallel. Nicolas.firstname.lastname@example.org (027 521 4020)
Karen Bender, Business Growth Manager, Grow Wellington. Karen.email@example.com (021 628 144)
The ability of computers to process massive amounts of data has been growing ever since they were invented. But processing speed has now reached a physical barrier: more processing power cannot be put onto a single-processor chip without it overheating.
The problem has been solved by putting more processors onto a single chip, creating multicore chips. These multicore chips entered the mainstream market a few years ago, and all vendors currently sell them. They are now standard kit in all laptops, desktops and smartphones.
Multicore chips are also more power-efficient, and in theory the number of cores that can be added is virtually unlimited.
Previously impossible computational tasks can now be achieved, and processes that previously took days or even weeks can now be completed swiftly.
But while this new processing power enables computers to do things faster, it also adds new challenges.
Before multicore, computer software was written for a single central processing unit (CPU) in a chip. To exploit the potential of multicore chips, software now needs to be written with parallelism in mind.
But parallel programming is different from traditional sequential programming, and so far few programmers have experience with it.
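The difference can be sketched in a few lines of Python. This is an illustrative example, not drawn from the conference material: the workload (summing squares over a range) stands in for one iteration of a computational job, and the decomposition pattern – split the data into independent chunks, process them concurrently, combine the results – is the core idea behind writing software “in parallel”. Note that in the standard CPython interpreter, threads do not speed up CPU-bound work because of the global interpreter lock; real multicore programs typically use processes or native threads in languages such as C, but the structure of the decomposition is the same.

```python
# Illustrative sketch: decomposing one computation into per-core chunks.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    # Each worker handles an independent slice of the range.
    return sum(i * i for i in range(lo, hi))

def sequential_sum(n):
    # Traditional single-CPU version: one loop over all the data.
    return partial_sum(0, n)

def parallel_sum(n, workers=4):
    # Parallel version: split the range into one chunk per worker,
    # run the chunks concurrently, then combine the partial results.
    step = n // workers
    bounds = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: partial_sum(*b), bounds))
```

Both versions compute the same answer; the parallel one simply arranges the work so that independent pieces can run on different cores at once.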
Multicore hardware is mainstream, but multicore programming is (as yet) a niche skill.
In the next 10-15 years, there will be huge opportunities to translate sequential programming (‘traditional’) legacy code, and to create new software that takes full advantage of thousands of cores in the next generation of chips.
Around the world, parallel computing is currently used to process the vast quantities of data produced by the internet – the ‘big data’ originating from social networks and from millions of intelligent data-recording devices attached to the internet.
Here in NZ it is also used in the biggest CGI rendering facility in the world at Wellington's Weta Digital.
And soon it will be a key component of the information processing required to handle the data produced by the Square Kilometre Array radio telescope – a global scientific project that New Zealand is part of.
In addition, there is a wide range of services, solutions and systems-integration challenges involved in connecting these two worlds.