Machine Understanding as the future of Computing

Wellington to Host International Conference on Advanced Computing Technologies.

A number of the world's leading authorities in the areas of Advanced Computing are converging on Wellington to attend the 5th Multicore World conference – an annual event that promotes networking and the exchange of ideas in all aspects of High Performance Computing.

One of the themes posed at this year's event is Machine Understanding as the future of Computing. Whilst Machine Learning is well known, Machine Understanding has only recently become popular, thanks to the massive leap in computing capability that new computer architectures have made possible. When combined with massive datasets, these systems can provide significant insights, whether for scientific or commercial purposes.

Machine Understanding encompasses everything from the massive signal processing being pioneered by the Square Kilometre Array (SKA) radio telescope project (of which New Zealand is a full member country), to the Deep Learning and Artificial Intelligence being pioneered by Google, Facebook and others. Simulation and modelling are also enhanced. What these have in common is the need to process large datasets power-efficiently, using clusters of computing elements as a highly distributed system – a multicore approach.

Other topics covered include:

• Software for the enterprise in a complex world

• Machine Understanding: the future of the Data Centre

• Big Data and Internet of Things: scalability and industry applications

• Heterogeneous Computing and Exascale Computing

• TOPS: The Operating System for a Data Centre Rack Scale Computer

Speakers:

• Prof Peter Kogge, University of Notre Dame, USA

• Prof Alexander Szalay, Johns Hopkins University, USA

• Dr. Happy Sithole, Centre for HPC, South Africa

• Prof Geoffrey C Fox, Indiana University, USA

• Prof John Gustafson, A*Star, Singapore

• Dr. Herbert Cornelius, Intel, Germany

• Dr. Tshiamo Motshegwa, University of Botswana

• Dr. Chun-Yu Lin, NAR Labs, Taiwan

• Dr. Balazs Gerofi, RIKEN AICS, Japan

• Giuseppe Maxia, VMware, Spain

• Dr. Nicola Gaston, McDiarmid Institute, New Zealand

• Dr. Mark Moir, Oracle, USA-New Zealand

• Aaron Morton, Apache Software Foundation, New Zealand

• drs. Martin Hilgeman, DELL, Netherlands

• Markus Dolensky, ICRAR, Australia

• Bryan Graham, SCION, New Zealand

Conference Background

Since the advent of computers, and later of the internet, the amount of data being processed has grown continuously. Industry has been increasing computing power for decades, but the trend towards ever-faster processing has hit a physical barrier: vendors cannot put more processing power into a traditionally designed chip without overheating it.

To solve the problem, vendors changed the architecture, building multiple processors into a single chip – the so-called multicore chip. These chips entered the mainstream market a few years ago, and all major vendors now sell them.

New multicore chips are also more power efficient, and there is in principle no limit to the number of cores that can be put on them. The potential processing power is unprecedented, which will not only allow users to do things faster but also to tackle current problems under new and broader conditions. It is now possible to imagine applications that were not feasible before.

However, this new and exciting scenario comes with a challenge.

Since the inception of computers, software has been written for chips with a single central processing unit (CPU). To exploit the potential of multicore chips, software needs to be written with parallelism in mind. Parallel programming is not a new concept, but parallel code is harder to write: it is estimated that fewer than 10% of software programmers worldwide are able to deal with parallel programming.
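
As a purely illustrative sketch of what "thinking in parallel" means (this is not material from the conference programme), the C fragment below sums an array twice: once with a plain sequential loop, and once with an OpenMP directive, one widely used way of spreading loop iterations across the cores of a multicore chip. The file name, array size and values are assumptions made for the example only.

/* Illustrative sketch only: summing an array sequentially and in parallel
 * with OpenMP, one common model for multicore programming.
 * Compile with: gcc -fopenmp sum.c -o sum
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 10000000

int main(void) {
    double *data = malloc(N * sizeof(double));
    for (long i = 0; i < N; i++)
        data[i] = 1.0;                 /* fill with dummy values */

    /* Sequential version: one core does all the work. */
    double seq_sum = 0.0;
    for (long i = 0; i < N; i++)
        seq_sum += data[i];

    /* Parallel version: the loop iterations are split across the
     * available cores, and each core's partial sum is combined
     * at the end (a "reduction"). */
    double par_sum = 0.0;
    #pragma omp parallel for reduction(+:par_sum)
    for (long i = 0; i < N; i++)
        par_sum += data[i];

    printf("sequential: %.0f  parallel: %.0f  (cores available: %d)\n",
           seq_sum, par_sum, omp_get_max_threads());
    free(data);
    return 0;
}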

In the next 10-15 years there will be huge opportunities, both in dealing with the legacy code accumulated over decades of sequential programming and in creating new software that takes full advantage of thousands of cores on a chip, plus the whole range of services, solutions and systems integration in between.

This is ideal ground for the fertile minds of technologists, software communities and researchers in New Zealand, Australia and the wider region: a technology that is mainstream yet still a new niche. Open Parallel, a New Zealand-based company specialising in software for multicore and parallel computing, has been working in multicore for years. To increase awareness of multicore, and to present the ecosystem that New Zealand and Australia already have in place to unlock the potential of multicore chips, Open Parallel is the organisation behind Multicore World – a global conference about multicore technologies (software and hardware).

The main goal of the conference is to provide IT decision makers, from senior managers to software community leaders, with the knowledge and connections they need to make sound business and technology decisions about their multicore software and hardware requirements over the coming years.

The 5th Multicore World conference runs 15-17 February 2016 at Shed 6, Queens Wharf, on Wellington's waterfront.

ENDS

© Scoop Media
