
"In the simulation and modeling and machine learning sectors of the broader high performance computing sector, perhaps one day there will be a unified field, like quantum mechanics and relativity, and perhaps there will be a single programming environment that can span it all," I said. We finished off the debate with me, the other co-editor at The Next Platform, arguing that a single unified HPC and AI development and runtime environment might be less desirable than we might think at first blush. But the long run can be a very long time.

We would add that it will be tough to get agreement when there are three major suppliers of compute engines in the data centre – Intel, AMD, and Nvidia – and agree that self-serving standards will not survive in the long run, as Olds pointed out.
#TV CHANNEL PLAYOUT SOFTWARE FREE REDDIT CODE#
We can unify HPC and AI software environments, just not at the source code level.

A single HPC-AI software environment is less desirable than you might think.APIs are sources of competitive advantage for many companies and, as such, not something that those suppliers should want to completely standardize – particularly when that standard is being driven by the largest and most influential supplier in the industry." "Why? Because this is a world of human beings who are working in the interests of themselves and their organizations. "There is no way in hell this will happen," Olds argued. Parallelism can be achieved by pipelining work through the compute graph and instantiating multiple compute graphs to process data in parallel."ĭan Olds, chief research officer at Intersect360, argued pretty vehemently against the motion. "Performance leverages the decades of work by compiler writers to optimize their compute graphs to maximize use of the hardware compute capabilities and minimize performance limiting external memory accesses.

"These graphs constitute the 'software environment' that can leverage all the hardware density and parallelism that modern semiconductor manufacturing can pack on a chip," Farber explained.
#TV CHANNEL PLAYOUT SOFTWARE FREE REDDIT HOW TO#
Interestingly, Farber suggests that the key insight is that any unification might not happen at the source code level, but within a compute graph that is generated in compilers, such as those based on LLVM, that constitutes a data structure generated by the compiler, regardless of the source language, that tells data how to flow and be crunched by the hardware. Then Rob Farber, who has worked at Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, and Pacific Northwest National Laboratory in his long career, and is now chief executive officer at TechEnablement, blew our minds a little bit with an intricate and technical argument, espousing the idea that a unified, agnostic software environment is an admirable goal, but difficult to achieve at the source code level because no one and no single machine architecture – current or yet to be designed – can be left out. Maybe Nvidia will return the favour in kind, and so might Intel with its oneAPI effort? Over and over."Īfter making this logical and hopeful argument for the convergence of the HPC and AI development tools and runtimes, Hemsoth conceded that having one stack is probably not going to happen, even if does make sense in the abstract, and that the best that we can hope for is a bunch of different stacks that can generate their own native code but also be converted to other platforms, much as AMD's ROCm platform can run CUDA code or kick out CUDA code that allows it to run on Nvidia GPU accelerators. "What is preventing us from bringing all minds to bear on solving problems instead of endlessly untangling a matrix of tools and code? Egos and near-religious adherence to preferred platforms, the not-invented-here syndrome, and lack of cooperation is the root of this particular evil and the outcome is this vicious cycle reinventing the wheel.
