Much has been written about this wave of AI, which holds great promise and is exploding along many dimensions. One of the phrases Satya used that resonated with me was that it is less about the breakthroughs in research and more about how those can be translated into frameworks and tools that enable devs to have impact in every industry: commoditize so that anyone can use it. This definitely speaks to the Innovation = Invention + Impact equation that I'm fond of. Where AI is concerned, it is also reassuring to have topics such as ethics and privacy front and center, getting so much emphasis from Microsoft and others. The pace at which the frontiers of AI are moving forward and opening up new opportunities is dizzying, but it isn't just the tech: it's how we build frameworks to incorporate it into society, government, law, etc.
The demos that really brought home the concept of the intelligent edge for me were seeing both a DJI drone and a Qualcomm intelligent camera run the same ML models trained in the cloud, doing inferencing locally without the need to upload. As a photographer, the evolution of the DSLR into a SmartSLR has been rattling around in my head for a while now, and I can really see that as a shining example of the intelligent edge. More on that in a future post. As a side note, the fact that DJI has an Azure Machine Learning SDK in beta has made the desire to pick up a drone overwhelming at this point :-)
In the same way that tasks are now accomplished using a wide variety of devices, with state stored and preserved in the cloud, so too are input modalities moving beyond the traditional, with speech and "intelligent speakers" showing the way for voice input. But why should you only interact with a single chosen / blessed solution? What if you want to mix and match different assistants for different tasks? The Amazon Echo / Cortana demo showed a solution that is, to me as a user of Cortana, Google Assistant, Siri, etc., an encouraging trend: enabling interop between assistants. The demo showed Cortana summoning Alexa and vice versa, having each one accomplish tasks it is good at, in the form of "Hey Alexa, open Cortana" and having it read the calendar. I hope that in the future the trend continues to encompass more solutions from more providers.
The remainder of Satya's keynote talked about the Microsoft Graph and how spatial and IoT data types are being incorporated via HoloLens to benefit and target "first line workers". This was brought together and portrayed by Lorraine Bardeen in a "meeting of the future" demo powered by AI. I'm not going to attempt to describe or summarize it here, but it is well worth watching.
Scott Guthrie Keynote
Scott Gu is obviously the man. I say obviously because a) he's been an idol of mine for the past 15 years or so and is a big part of why I moved to the US and joined the company, b) he runs half of Microsoft, and c) he came out with the best line in all of the Build keynotes this year: upon taking a handheld mic from Scott Hanselman and holding it for him, "let me add some value", so the "lesser" Scott could keep typing and doing his demo. Priceless. Oh, and d) um, yes, OK, he is now my boss as well. Aside from the overall greatness of the man himself, this keynote contained the following items that I plan on investigating further and/or trying out:
Visual Studio Live Share: this is such a freaking cool idea / feature, which enables collaborative / pair programming between different machines running different instances of Visual Studio and/or VS Code. Instead of emailing a colleague to get help with debugging an issue, you can now start a real-time collaboration session and debug together, remotely. My only follow-on question here is when I can do this in C++ projects as well as .NET ones.