And China already had its own search engine, its own cloud computing services, its own AI labs, even its own TensorFlow. Called PaddlePaddle, it was built by Baidu. ‘Genius Makers’ by Cade Metz, pg. 225
What Is A.I.?
I do not know what Artificial Intelligence (A.I.) is. Don’t lie, you don’t either. The world has been talking about A.I. for decades, if not centuries. We’ve had A.I. in games since at least the 1980s. But what is it, really?
I’m trying to read more about the subject in order to hopefully get a better handle on this technology space, which is one reason I picked up Cade Metz’s genius book on the subject, ‘Genius Makers’. Cade zeroes in on the activities of some of the biggest Silicon Valley tech companies (oh, and also Baidu) to take the pulse of where exactly we are with regard to the current technological capabilities of ‘Artificial Intelligence’, or, perhaps more appropriately named, ‘Machine Learning.’ Much of the academic research on the subject is getting soaked up by big tech firms (and probably government as well), so most of humanity is not privy to what is really possible in the field today.
But the cloud computing companies seem particularly open to providing A.I.-related web services for customers to consume and build on in the never-ending innovation cycles of both cloud computing and A.I. Services such as AWS’ ‘Rekognition’ use the power of machine learning to identify objects, faces, and text within images, while ‘Amazon Polly’ applies it to language, converting written text into natural-sounding speech.
The power of these cloud computing services is extremely hard for smaller tech companies to ignore when trying to keep pace and offer similar capabilities to customers. The secret weapon of the cloud computing companies is seemingly infinite compute capacity (vast arrays of GPUs, CPUs, and ML chips, infinite storage, etc.).
‘Genius Makers’ points this out when discussing Jeff Dean’s development at Google of a computer chip built specifically to run TensorFlow machine learning jobs in the Google Cloud computing environment (pg. 221).
OpenAI was founded in 2015 by Elon Musk, Sam Altman (of Y Combinator), and others as an A.I. lab that would contribute A.I.-related research, projects, and software back to the community, providing a counterbalance of ‘good’ to the potentially ‘evil’ A.I. that might be built in secret labs around the globe. ‘Genius Makers’ points out that keeping researchers involved in OpenAI is apparently quite difficult, as the big tech firms offer big money to woo talent away from the project.
Then there is the difficulty of keeping the software developed by OpenAI freely available and open, as exemplified by Microsoft’s recent exclusive licensing arrangement with GPT-3.
Elon Musk resigned from the OpenAI Board in 2018.
Building Smarter Humans
Cade Metz’s account of the evolution of AlphaGo was especially fascinating to me. The fact that the DeepMind group within Alphabet built software capable of outplaying the best Go players in the world was not particularly surprising, but the fact that those same players could further improve their own game by studying how the DeepMind software played against them was genuinely intriguing.
Perhaps the real goal of ‘Machine Learning’ is not to teach computers to do things better by training them with copious amounts of data, but rather to teach humans how to step up their game in whatever task they seek to dominate.
podcast learning deep learning neural networks cade metz geoff hinton andrew ng boltzmann machine back propagation artificial general intelligence alphago generative neural network google brain deepmind jeff dean ian goodfellow GAN demis hassabis go artificial intelligence