SXSW: Better than a Tornado Full of Sharks
This SXSW recap is part of our LiquidSpace Voices content campaign, featuring a new author from the LiquidSpace team each week.
If you are susceptible to FOMO, SXSW is not for you. In the same 3:00 PM time slot on a Tuesday, there are talks on conversational UIs, space exploration, millennials, and jazz, plus a couple of happy hours and a Dutch rave. At SXSW, every surface becomes a canvas for advertisers and every interaction a possible business transaction. Mostly sticking to my rule of two drinks a day and wearing comfortable shoes, I absorbed a lot from the talks this year. Here’s my recap.
One thing is clear: Austin before and after SXSW has its own flavor of hipster-techy vibe, with a food truck on every corner and a dash of southern hospitality conducive to living well and to starting and growing a business. With that in mind, we put together our top picks for your next team office in Austin.
From Alexa to Google Home, voice-activated interfaces, deep learning, and AI dominated conversations at SXSW and beyond. The concern is that the government is spending only one-eighth of what the private sector spends, and this number is projected to decrease under the new administration. We need a long-term view and a focus on infrastructure to continue to push the AI frontier in the US.
This SXSW panel found that the rate of adoption of new technology in government organizations is not keeping up with the rate of change of the technology itself. Another large concern is the shallow talent pool, with a majority coming from academia. The panel urged the SXSW crowd to push for diversity and for investment in STEAM (Science, Technology, Engineering, Arts, and Math) education to prepare the next generation for this AI frontier.
Conversational interfaces are here to stay. Born from advances in natural language processing, AI, deep learning, and the large data platforms of Facebook, Amazon, and Google, the #talkingweb is taking off.
“The way we’ve bought on the internet has been through glass interfaces,” said Chris Messina, product designer.
We have access to thousands of interactions with glass devices, and we can learn from them to relax the physical constraints of buttons in favor of visual representations. With voice interfaces, the learning curve will be much faster because people can provide direct feedback, and design will adapt to recognize and prompt for verbs and request patterns. The prediction is that people will speak to computers directly and receive contextually aware answers, instead of relying on a pre-defined information architecture that may or may not have accounted for their specific needs.
The promise of these experiences is seamlessness, as both identity and payment will already be ingrained in the large platforms. However, these platforms threaten the open web, and the panel looked forward to an open-source movement in this space.
As we head in the direction of making interfaces more like us, how do we inject personality, contextual awareness, empathy, and social fabric into the experience? Otherwise, as Dries Buytaert, creator of Drupal, put it:
“It feels like I am locked in a room with just one person.”
We are a long way from making conversational interfaces human. The risk is that our command-line speak fails to account for the culture, empathy, and context that matter, turning us into assholes.
“Cognitively, I’m not sure a kid gets why you can boss Alexa around but not a person.” -Hunter Walk, Amazon Echo Is Magical. It’s Also Turning My Kid into an Asshole.
AI is everywhere, from curing cancer to beating human poker players. By analyzing massive amounts of data with pattern recognition and rapid learning, its abilities are moving beyond what people can match. Nvidia is building “the brains” of self-driving cars: 24 trillion computations a second, about the power of 150 MacBook Pros.
Training includes taking a pattern of driving, such as driving in fair weather, and applying it to bad weather to measure carryover and accuracy. More human-centered analysis focuses on copilot tracking, using the driver’s gaze to determine their state and what assistance the car can provide. Important scenarios, such as a child running across the street, are fed into the data set at higher frequency so the algorithms learn critical, although infrequent, situations.
Self-driving cars are already here. Roborace is fielding a fleet of 20 one-design self-driving cars on the Formula E circuit, Volvo is letting people “drive” fully autonomous cars in Sweden, and courses like the Udacity Self-Driving Car Nanodegree are consistently overbooked.
How do we translate brain activity into limb movement when the pathways have been damaged? Unlike in the movies, prosthetics have not progressed significantly from the body-movement-activated ones of the 1880s. However, brain implants and peripheral muscle implants that bypass the injured site are beginning to help the disabled regain movement. Fifty million people in the US have some stage of disability. Most have a hard time getting a job, and those who do receive significantly lower pay than their non-disabled counterparts.
Jennifer French, a quadriplegic as a result of a snowboarding accident in 1998, is the 6th recipient of an experimental neuroprosthetic system that allows her to stand up and use her own leg muscles, which are stimulated by 24 surgically implanted electrodes. She is now a Paralympic sailing athlete. Jennifer urged the FDA and CMS to work together to approve and reimburse medical-device treatments instead of making them the last resort, far below drug therapies. The technology that has allowed her to stand for the past 18 years has not progressed since. Longer-horizon investment, and more attention from the media and the talented workforce, are needed to advance neuroprosthetics.
Some people do original things in the world; Adam Grant studies them. Here’s what he’s discovered.
Originals — aren’t risk-seekers
They might look like they are, but they don’t like the idea of failing. Founders who keep their day jobs when starting a new venture are 33% less likely to fail.
Originals — avoid false negatives by giving ideas lots of attention before rejecting them
The problem with intuition is that it is the subconscious’s pattern-recognition system, which makes it terrible at judging new ideas. The best people to provide feedback are those outside the industry, who can look at an idea with fresh eyes.
Originals — make the unfamiliar familiar
Don’t play it safe; communicate effectively. It can take 10–20 exposures to an idea for someone to appreciate it, so create a feeling of familiarity on the first pitch by building a bridge to something people already understand. This is why it is so important to learn about things outside your field.
Originals — admit their weaknesses
Originals — hire differently
How do you build a team of original thinkers? Hiring for culture fit, over time, becomes synonymous with groupthink and weeds out diversity of thought.
Originals — fight groupthink
If you often say “don’t bring me problems, bring me solutions,” you’ll never hear about all the things that aren’t actually working.
Scott Evans, Max Oglesbee
Hudson Yards and Intersection are building a full neighborhood for the next connected generation, responsive to the mediums people use to live in and interact with their physical world. It is the largest privately funded real estate development in the history of the US.
With infrastructure and technology working together, Hudson Yards combines what people bring into the built environment with infrastructure built to adapt to changing technologies, creating a zero-friction environment. For success, several pillars need to work together:
- Connectivity — always on
- Recognition — who you are and your context
- Location — who, where, when, why
- Transaction — frictionless payment
- Sensing — computer vision and machine learning to learn about people and environments
- Integration — getting all the pieces together
David Fuller, Lauren Silverman, Robin Murphy, Scott Niekum
There is a zealotry in the media that portrays deep learning as very similar to the brain; from a computational perspective, however, we are nowhere close. Robots are being used in situations where neither humans nor animals can fill the need. Disaster robots, for example, were used for search and recovery after both 9/11 and Hurricane Katrina. Robin Murphy, Director of Robot-Assisted Search and Rescue, stated:
“It’s almost unethical to have a technology that could reduce disaster recovery by 10 days, and not use it.”
There are significant lags in using robots in search-and-rescue capacities because standards need to be put in place first. But the Catch-22 is: how do you put a standard together for something that just came to market? This creates a twenty-plus-year lag in bringing robotics to where it’s needed.
Robots are hard; video demos are easy. Robots need to be built as end-to-end systems, integrated with an environment they can absorb and learn from. Reliable, precise, mechanically sound, secure platforms need to be created to support robots coming online. A continuing theme is the need for both long-term government investment and training of talent.
Predictions from SXSW
Robin predicted that the approximately 700 disasters every year will have a more dire impact as populations densify, and that robots will be used more for humanitarian response and prevention. Flooding, for example, is the number one disaster each year; emergency managers can start evaluating topographical maps for elevation and high-risk zones to better prepare.
A move to big data for robotics. Right now there are many dispersed labs and medium-sized data sets that are not shared between robot systems. The panel predicts that in the next 5 years this sporadic data will be aggregated and shared so that algorithms and robots can be trained on it.
This SXSW panel saw a huge need for robots as companions, especially for the elderly. With that comes the need to make robots human-friendly, so they fit into the changing social fabric.
Katie Morzinski, Jayne Birkby, Patrick McCarthy, Taft Armandroff
A new telescope, the Giant Magellan Telescope (GMT), is slated to be completed by the end of this decade in a remote part of the Chilean Andes, far from the light of cities, and it will push our planetary exploration capabilities toward ever farther galaxies.
The Kepler space telescope has already found over 1,280 exoplanets in our Milky Way galaxy, over 10 of which are potentially capable of supporting life as we know it. In most star systems, however, the planets are much closer in and much larger than ours, making comparative predictions harder because we have no analogous planets in our solar system.
One technique for identifying planets is called adaptive optics. It works much like using your thumb to block out the light reflecting off the moon on a starry night, except astronomers block out the much larger star to see the much smaller planets next to it. Scientists then look for signs of oxygen, methane, ozone, and carbon monoxide, a combination that is difficult to create inorganically and a signature similar to Earth’s, by analyzing the starlight that passes through the atmosphere around the planet.
Whether there is life on other planets remains a mystery. What’s certain is that Austin is happening now, and it’s more than just the host city for SXSW. It’s entrepreneurial, with a hipster vibe, strong technology chops, and a love for nature, food, and music, all of which make it a great city to start and grow your business. Don’t wait for next year’s SXSW: if your team is in Austin, you can get into the perfect team space today. We’ve selected our top picks for your next team office; check them out here.
Thanks for reading this SXSW recap. We hope you stay tuned for more upcoming LiquidSpace Voices content, with a new author each week.
Head of Product / Crafty Innovator