July 23, 2018

Bonus Module: SPRING Events: IRI Annual Conference & Trend Immersion Experience

CRASH COURSE DEEP DIVE

Recap and Key Insights about the Future of Intelligent Systems

Background: The Sourcing PRedictive Insights for New Growth (SPRING) program is a collaboration between RTI International Innovation Advisors and the Innovation Research Interchange (IRI) to explore emerging trends and their impact on innovation management. Together, SPRING team members and program participants (you) engage in a structured foresight process to help build confidence and conviction about the steps organizations can take to prepare for the future. This year, SPRING is exploring the future of intelligent systems.

The SPRING program includes a variety of activities from field trips and workshops to curated content modules and webinars, culminating in a 2-day conference (SPRINGBOARD) at RTI Headquarters on October 23 and 24, 2018. For more information about SPRINGBOARD and to register for the conference, check out our website. To sign up to be a part of the SPRING program, click here.

SPRING program elements: The SPRING program has five program elements: learning modules, trend immersions, conference sessions and workshops, thought-leader interviews, and a capstone 2-day conference. This report is a summary of a recent IRI conference and trend immersion.

Conference sessions and trend immersion field trips are opportunities to experience firsthand the signals of emerging trends by getting out of our offices and interacting with the technologies, visionaries, and companies shaping the future. These firsthand experiences stretch our thinking and lead to insights that would not be obtained from reading about them.

Description: Even though one of the core premises of a conference session or trend immersion field trip is that you must be there to get the full effect, we have done our best to bring back some signals and insights from the IRI Annual Conference and SPRING Immersion Experience. We have gathered overviews of key conference sessions dealing with both foresighting and the current and future state of intelligent systems. We have also provided a brief overview of what we saw on the Atlanta immersion field trips to Georgia Tech and to Microsoft. To give you a sense of what it was like to be there in person, we have provided the perspectives and insights from some of your IRI member and corporate peers—the Pioneers. The Pioneer group is a small and dedicated group of IRI members who actively engage with the SPRING program this year. The Pioneers are guides, interpreters, and contributors to our foresighting work, digging into this year’s topic and anchoring the SPRING research and elements for a corporate audience. In addition to this overview, we have also provided links and additional information for you to explore so you can immerse yourself.

Format: As with all of our content for SPRING, we provide different levels of engagement. The first is the CRASH COURSE, designed to be completed in 30 to 60 minutes. Next is a DEEP DIVE, which you can engage with selectively (or in whole) to get even more information on a topic. In this case, the DEEP DIVE content aims to re-create conference and trend-immersion field-trip experiences through videos and photos from the event.


ATLANTA CRASH COURSE MODULE

Explore IRI Annual Conference Program

The 2018 IRI Annual Conference was a very successful and active event. Held June 4 to 7, 2018, in downtown Atlanta and attracting over 300 people, the IRI Annual Conference was co-located with the National Science Foundation (NSF) Small Business Innovation Research (SBIR) program. The NSF initiative convened over 200 small businesses to profile their innovations and to network with IRI members. The IRI Annual Conference featured pre-conference workshops and a wide variety of sessions covering the latest in innovation management. To get a sense of the wide range of topics and diverse set of expert speakers at this year’s IRI Annual Conference, please see the full agenda. This meeting was IRI’s first as the Innovation Research Interchange (it was formerly the Industrial Research Institute), and IRI’s three key new value propositions were on full display:

  • SPRING (see above)
  • TRACK: “Training Resources to Advance Competencies & Knowledge” supports IRI member companies’ needs in developing strategic innovation leaders.
  • PILOT: “Practices in Innovation Leadership, Operations, and Talent” engages members and other thought leaders to identify, sponsor, lead, support, and conduct activities that advance the understanding and development of best practices in innovation management.

The first task in this crash course is to spend some time reviewing the conference program. Look for topics in programming that will help you learn more about the current and future state of intelligent systems, futures work in general, and SPRING in particular. Review the sessions to get a sense of the different domains of intelligent systems and emerging trends. Note the topics of interest to your organization, as well as ones your organization may not yet, but perhaps should, be fully considering.

Next, review the SPRING update slides (IRI Members only) presented at the conference to get a feeling for how SPRING is progressing and how it fits into the overall conference programming. In the DEEP DIVE, you will find the slides and recordings from the SPRING sessions and others relevant to foresight and intelligent systems.

Check out SPRING Immersion Experience Highlights

On June 7, 2018, as a great bookend to the IRI conference, more than 20 IRI members traveled around Atlanta for the day on a SPRING Trend Immersion field trip. The immersion experience began at Georgia Tech, where the group visited three different robotics labs:

  • Food Processing Technology Division
  • Robotarium
  • Robot Autonomy and Interactive Learning Lab

At the Food Processing Technology Labs, we learned about the challenges to deploying robots in the field (literally) and how that is more difficult than the controlled and “structured” conditions in a manufacturing environment. We heard about several interesting applications for robots being explored at Georgia Tech, including the following:

  • Robot-to-animal interactions—In chicken houses, humans are the greatest source of introduced disease. Instead, robots can roam the chicken house monitoring for unsafe conditions, monitoring bird audio to detect disease, and picking up stray eggs. This allows humans to enter the house less frequently and only when necessary.
  • Intelligent cutting—Poultry processing is a dangerous and exhausting task for human workers. A robotic cutter could potentially increase yield while eliminating repetitive motion injuries. However, the complexity of dealing with “unstructured” environments and variability in poultry make this a very challenging problem to solve.
  • Remote sensing of plant stress—Drones provide the ability to fly over and scan a crop over time to detect signs of plant stress, but their time in the air is quite limited. As an alternative, Georgia Tech has created a two-armed robot that swings, like Tarzan, across a series of lines suspended above a crop. This allows for more-frequent and more-detailed imaging of the plants.
  • Collaborative robots—The simple act of finding an apple that is ready to be picked on a tree is incredibly difficult for a robot. Georgia Tech is exploring how two robot arms, each with a different type of camera and visual sensor, can work together to find an apple hidden among leaves, branches, and other partial visual hindrances.
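The audio-monitoring idea above lends itself to a simple sketch. The snippet below is a minimal, hypothetical illustration of flagging a "variation from the norm" in flock sounds; it is not the Georgia Tech system, just a baseline-and-threshold check on an assumed per-minute loudness feature.

```python
import statistics

def baseline_stats(feature_history):
    """Summarize a 'normal' window of audio features (here, assumed
    per-minute loudness readings) as a mean and standard deviation."""
    mean = statistics.fmean(feature_history)
    stdev = statistics.pstdev(feature_history)
    return mean, stdev

def flag_anomaly(sample, mean, stdev, threshold=3.0):
    """Flag a new reading that deviates more than `threshold` standard
    deviations from the baseline -- a crude proxy for listening for
    'variation from the norm' in a coop."""
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > threshold

# Hypothetical per-minute loudness readings from a healthy flock
healthy = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41, 0.46, 0.44]
mean, stdev = baseline_stats(healthy)

print(flag_anomaly(0.43, mean, stdev))  # typical reading -> False
print(flag_anomaly(0.95, mean, stdev))  # sudden loud distress -> True
```

A production system would of course work with richer spectral features and learned models, but the shape of the problem (establish a baseline, flag deviations) is the same.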

Although there is a lot of interest and work on robotics for food and agriculture applications, dealing with “real-world” unstructured environments out in the “field” still poses a lot of technical challenges.

The next stop on the field trip was one of the highlights: We visited the Robotarium, and the Director, Sean Wilson, explained and demonstrated the lab’s work. This new facility houses nearly 100 rolling and flying swarm robots that are remotely accessible to anyone. Researchers from around the globe can write their own computer programs, upload them, and get the results as the Georgia Tech machines carry out the commands. At this lab, we saw collaborative “swarm” robots work in concert to achieve rules-based objectives.

Here is a short video clip of those robots moving independently, but in coordination, to create a simple formation:

A second video clip shows the robots avoiding collisions.
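To make the "rules-based" coordination concrete, here is a minimal sketch of robots converging on a formation while refusing any move that would bring a pair too close. This is an illustration only: the Robotarium itself uses control barrier certificates and real robot dynamics, and the step size, safety radius, and function names below are assumptions.

```python
import math

SAFE_DIST = 0.3   # minimum allowed separation between robots (assumed)
STEP = 0.1        # distance each robot moves per tick (assumed)

def step_toward(pos, target):
    """Move one robot a fixed step toward its assigned formation slot."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= STEP:
        return target
    return (pos[0] + STEP * dx / dist, pos[1] + STEP * dy / dist)

def safe(candidate, others):
    """Simplified collision check: every pair must stay farther apart
    than SAFE_DIST."""
    return all(math.dist(candidate, o) > SAFE_DIST for o in others)

def tick(positions, targets):
    """One synchronous update: each robot advances only if the move is safe."""
    new_positions = list(positions)
    for i, (pos, target) in enumerate(zip(positions, targets)):
        candidate = step_toward(pos, target)
        others = new_positions[:i] + new_positions[i + 1:]
        if safe(candidate, others):
            new_positions[i] = candidate
    return new_positions

# Three robots converging on a line formation
positions = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]
targets = [(0.0, 1.0), (2.0, 1.0), (1.0, 1.0)]
for _ in range(30):
    positions = tick(positions, targets)
```

After enough ticks, every robot sits on its formation slot; blocking unsafe moves (rather than planning around them) is the simplest possible stand-in for the collision-avoidance behavior in the video.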

Our last stop at Georgia Tech was the Robot Autonomy and Interactive Learning (RAIL) Lab, where Dr. Sonia Chernova described current research efforts to enable increased and realistic interactions between humans and robots. The RAIL Lab is working on “legible motion,” which involves creating rules for robots so that the way in which a robot moves and presents itself to a human is predictable and intuitive to the human. They are also working on ways to teach robots from semantics instead of long lists of rules. For example, if you walked into someone’s home for the first time and they asked you to grab a fork, you would likely start by looking in the top drawers in the kitchen. Humans have that semantic knowledge, but robots do not. Dr. Chernova spoke about the “brittleness” of robots today. Robots can be trained to do simple tasks repetitively, but they are not yet ready for unstructured and highly variable environments. Grasping an object is easily achieved, but what if the robot accidentally knocks over an object while grasping? Many robots are not yet ready to deal with such variances. Given the state of research on grasping and manipulation, Dr. Chernova believes that mobile manipulators capable of reliably grasping a wide variety of objects (think personal assistant robots) are still 10 years away.
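The fork example hints at what semantic knowledge buys a robot. As a toy illustration (not the RAIL Lab's actual representation), a robot with a semantic prior can order its search by likely locations instead of searching everywhere; the knowledge base below is entirely hypothetical.

```python
# Hypothetical semantic knowledge base: object categories mapped to
# likely locations, ordered from most to least probable.
SEMANTIC_LOCATIONS = {
    "fork": ["kitchen top drawer", "dishwasher", "dining table"],
    "towel": ["bathroom cabinet", "linen closet", "laundry room"],
    "remote": ["living room sofa", "coffee table", "TV stand"],
}

def search_plan(obj, knowledge=SEMANTIC_LOCATIONS):
    """Return an ordered list of places to look for an object.
    Without a semantic prior, the robot falls back to an exhaustive
    search of the whole environment."""
    return knowledge.get(obj, ["(no prior: search exhaustively)"])

print(search_plan("fork")[0])  # -> kitchen top drawer
```

Real systems learn these priors from data rather than hand-coding them, but the payoff is the same: the robot starts looking where a human would start.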

After visiting the Georgia Tech labs, the SPRING Pioneers and other IRI member attendees had a working lunch to discuss our impressions of the morning at Georgia Tech, and then we headed off to Microsoft. We visited the Microsoft Innovation Center to experience what the latest generation of mixed- and augmented-reality technology can do. Microsoft’s mixed-reality system is known as HoloLens. The HoloLens is an untethered computer with a transparent lens. Unlike virtual reality, HoloLens projects an augmented layer on top of the normal visual field. Users can still see the real environment around them, while new augmented sights appear in their field of view. Hand and voice gestures, detected by the HoloLens headset, are used to interact with the augmented images. The hardware is completely incorporated into the headgear, so there are no wires to keep the user constrained to a particular space.

According to our host, Andrei Ermilov, HoloLens provides a new interface for the 80% of workers who do not have a desk (think first-line workers). Applications for which Microsoft envisions HoloLens include augmented troubleshooting of equipment, employee training, and space planning. The demos we watched also showed virtual meetings, where remote employees shared a common view of an item, worked together to manipulate the item, and annotated the item.

Microsoft expects HoloLens and mixed-reality applications to replace desktops, mobile phones, and other current-state technology.

For more on Microsoft’s vision of the future workplace, check out this video:

Here are a few pictures of the visit and our group at the Microsoft HoloLens demonstration:

Read RTI and Pioneer Perspectives

RTI and IRI had many representatives at the IRI Annual Conference, and we teamed up with the Pioneers to experience and reflect upon this year’s conference and the SPRING Immersion Experience. Here are the unique observations and key takeaways of a few of these participants.


RTI Expert Perspectives

Jeff Cope

Background: Jeff Cope is the director of strategy and innovation in RTI’s Innovation Advisors and a longtime IRI member.

Areas of expertise: emerging technology trends, innovation strategy and management, technology and market research, new technology commercialization, innovation ecosystems

Contact: jcope@rti.org

What were the most impactful experiences and top takeaways from the IRI Annual Conference?

For me, the top takeaway this year is an observation about the pace at which new technologies are moving to commercial application. In particular, it’s the variance of that pace that I find so striking. The Annual Conference really brought this point home for me. Here’s why:

On one end of the spectrum, we have Tan Le and Emotiv, who have successfully miniaturized EEG and reduced the cost to a point that it has now become a transformative technology. The applications she described—helping a paralyzed girl express herself and even control her own mobility, a quadriplegic driving a race car with his mind, controlling a robot using our minds—were astonishing and inspirational. Where Emotiv takes the technology next, into drug-free alternatives for ADHD treatments or to diagnose mental illness, is simply incredible. I sat there in awe, thinking, “How did this come about so fast?” I recall that my team worked with NASA a decade ago to commercialize EEG biofeedback technology, which could alter the playing experience of a video game. Seeing all the new applications for the technology, actual and planned, was simply amazing.

Contrast this mind-blowing experience (pardon the pun) with our tour of the robotics labs at Georgia Tech. Both EEG and robotics are technologies that have been around for decades. In the case of Emotiv, they have found ways to reduce EEG hardware cost and improve its software to increase its use. Similarly, a lot of investment has been made to do the same for robots—to move them “out of the cage” of a controlled manufacturing environment and into more social and collaborative applications.

Headed for the robotics tours at Georgia Tech, I was excited to hear about the new achievements in robotics and autonomy and where and when we will start to see robots working more closely with humans. I was quite surprised to find that the challenges facing widespread adoption of robots over a decade ago are still the same challenges today. Robots were described by Georgia Tech researchers as “brittle.” They can do one thing “really well,” especially if that thing is repeatable and in a well-controlled environment. Ask a robot to do more than one thing, or to do it in environments that are not always the same, and its abilities drop off dramatically. Still. Now, Georgia Tech is no slouch in the robotics field. They, along with Carnegie Mellon University and the Massachusetts Institute of Technology, are strong academic contributors to advancing the field in the United States. So, it was incredibly surprising to hear their researchers collectively say that the hurdles that remain to breaking out of that controlled, repeated environment and enabling more autonomy and more collaboration between humans and robots are still surprisingly high. Grasping more than one type of item, knowing what to do if an item is knocked over accidentally, being able to “see” objects as humans see: all these basic capabilities were described as long-term aspirations at this point. In an age of pending autonomous vehicles, this realization was a bit of a wake-up call for me.

“I was quite surprised to find that the challenges facing widespread adoption of robots over a decade ago are still the same challenges today.”


Angel Hedberg

Background: Angel Hedberg is a corporate strategy manager at RTI International. Her primary responsibilities include environmental scanning and analyses of trends, competitor performance, and markets to inform growth, business development, and client-relationship management strategies. She brings a unique perspective to intelligence and the dynamics of collaboration and knowledge sharing from 14 years of experience across market intelligence, strategy and business development planning, sales and operations training, and pipeline management.

Areas of expertise: business problem identification, articulation, and communication; strategic planning; competitive intelligence; foresight

Contact: ahedberg@rti.org

What were the most impactful experiences and key takeaways from the IRI Annual Conference?

Formal futures foresighting work is a nascent function within many of the organizations in the IRI community, but many of its elements are not new to the innovation community. Attempting to predict what will happen or what users will need in the future is at the heart of innovation, so there are some important similarities to futures work. Yet there are also clear differences: innovation forecasting is typically project-based, and its ultimate goal is to serve clients in new ways. It helps to start with a user problem, but an unbiased exploration of trends and alternative futures is important to understand the context in which that user problem will exist in the future. Innovation experts bring good habits to forecasting: being open to new ideas and information, thinking differently, and acting to shape the future. The challenge in foresight is to produce a good vision or story of the future, not a case study.

I encountered examples of these behaviors and stories throughout the IRI conference. The Brilliant Failures team shared their insight that “when you take ownership, you learn.” In futures foresighting work, when you participate in the process, your experience is part of the insight. You cannot unlearn what you have created in foresight. Matt Hermstedt, senior director of R&D at Accudyne Industries, offered six steps toward creativity. The key takeaway was that it takes practice to be creative; the same is true in developing competencies for seeing and sharing foresight. Jason Wild of Salesforce shared, “If you aren’t inventing, you are outsourcing your future.” Preferred Futures, a foresight tool, is often used to help create the future you want to see happen. Envisioning this future can drive actions that make it real. However, in the formal futures context, foresighting is not about looking for a right answer. Once you develop your preferred future, you can also identify ways to respond to change that is happening today, to anticipate and plan for the new tensions and boundaries that will arise, and to prepare your organization for those changes.

I’m excited to see how the innovation community uses its bias toward exploration and completion as a strength in working with futures foresighting. The final step of foresight is action, but it is also the hardest step to reach. Innovation professionals who also learn the practices of foresighting may just present the ideal combination of skillsets and mindsets to drive and shape the future.

“Innovation professionals may just present the ideal combination of skillset and mindset to drive to the future.”


Pioneer Perspectives

Meet Our Pioneers

Joe Fox, Director, Emerging & External Technologies at Ashland, Inc.

Mike Blackburn, Research & Development Portfolio/Program Leader at Cargill

Steve Moskowitz, Strategic Innovation Manager at Entegris

Jennifer Peavey, Innovation Designer at Eastman Chemical Company

Tim Dennison, Executive Director, Process Platforms, Corporate R&D at Sealed Air Corporation

Ashish Vasil, Senior Vision Systems Specialist at The Timken Company

1.  What was your primary reason for attending the SPRING Immersion Experience at Georgia Tech and Microsoft?

Joe Fox: I wanted to get a feeling for the state of the art in intelligent systems, particularly in the areas of robotics and virtual/mixed reality.

Mike Blackburn: As the SPRING activities have been ramping up over the past few months, and being part of the SPRING Pioneers group, this was an opportunity not just to see the state of technology but also to connect with others who are part of SPRING. A goal is to build a community of practice around foresight. Getting together as a group was an important step in building this community.

Visiting the labs at Georgia Tech and the Microsoft Innovation Center gave us the opportunity to see and experience technology and to think about the future from a perspective that is different than our normal working environment. Interacting as a group about what we saw and what we were thinking helped us to broaden our views and open us to new ideas. This was a very good exercise to help us think of the future as a group.

Steve Moskowitz: We recently decided to fully participate in the IRI SPRING program, and this was my first chance to engage with the team and begin thinking about the possibilities of intelligent systems in practical terms, not just presentations.

Jennifer Peavey: I was most interested in networking with other likeminded individuals who are interested in innovation and intelligent systems.

Tim Dennison: I wanted to see new technology pertaining to robotics and to understand the current capabilities and applications for robotics.

Ashish Vasil: I was looking forward to updating myself on the current areas of research related to robotics, automation, and intelligent systems at Georgia Tech. I was also interested in the opportunity to discuss with a Microsoft developer the current and future direction for mixed reality.

2.  What were the most impactful experiences from the SPRING Immersion Experience at Georgia Tech and Microsoft?

Joe Fox: The mixed-reality experience at Microsoft. I had experienced virtual reality before, but not mixed reality.

Mike Blackburn: When it comes to robots and artificial intelligence, we are often influenced by the science fiction stories and movies that we see. We build a picture in our minds and often believe some parts of the technology are far ahead of reality, while other capabilities we didn’t expect are already here. For example, a robot that acts like a human may seem possible when, in reality, robots are highly specialized for particular tasks. Some of the basic things we can do as humans are still very difficult for them. At the same time, when a robot moves toward us, our response is to back up with uncertainty; how a robot interacts in an environment with humans is also a huge challenge. Augmented reality, on the other hand, can be used today. The potential applications were easy for the group to identify, and the deployment of this technology could be realized in the near future. These experiences were very helpful as we think about future possibilities.

Steve Moskowitz: At both the robotics labs and the Microsoft HoloLens discussion, it was interesting to learn about how we (humans) use our senses and the differences with robots and augmented reality. While both forms of intelligent systems allow us to expand our senses, they are also both limited in that they don’t have the use of smell (or taste) to increase their overall awareness. One of the big ah-ha moments for me was to realize that we are still designing and developing robots and VR/AR [virtual reality/augmented reality] based on what humans know and do and how we interact. I believe a big breakthrough will come when we start designing for what they CAN do, even if it is vastly different from how a human would solve the problem.

Jennifer Peavey: The ability to try out the HoloLens, I think, was the most impactful. It allowed one to fully understand what the technology is like, where it presently is, and then, hopefully, see where it is going.

Tim Dennison: I was intrigued by the possibilities of applying the Microsoft HoloLens in education and remote support.

Ashish Vasil: It was quite informative to learn about the different areas of research at Georgia Tech. I had the opportunity to experience demonstrations and discuss with developers projects in the areas of swarm robotics, collaborative robots and the different projects in the Agricultural Technology Research Program.

It was exciting to see the efforts in the area of swarm robots in the Robotarium and to discuss the future direction for this technology. It reminded me of “Prey” (Michael Crichton). Distributed AI looked well within reach. Having the Robotarium as an open resource for development and testing by users from around the world was wonderful.

The brachiating swinging robot (Tarzan) for extended monitoring of large field areas was illuminating; it fills an indisputable need as a tool for the future of farming.

There were other projects addressing the needs of the poultry industry that would also have applicability in other industries. Specifically, it was great to see how the human expertise, finessed over time, for efficiently handling poultry to remove the meat was captured and codified into robot moves. This could carry over to similar applications where it is beneficial to capture expertise that is not easily translated into a simple programmatic set of motions. Also, the idea of monitoring sound from large poultry coops for any variation from the norm, to identify any stress or difference that needs attention, was innovative, and it was heartening to see it implemented.

Overall, it was not just the ideas that have been automated at Georgia Tech that impressed me; the level of detail that had to be identified and addressed to get each of these systems to work was equally impressive.

It was a great immersive experience to test out mixed reality firsthand. I had the opportunity to discuss the technology, different facets of applying it to industrial applications, and how best to engage Microsoft on the HoloLens platform. This technology, as applied to the current holoportation project, seems very exciting for the future of communications.

3.  What were your top three takeaways from the day?

Joe Fox:

  1. Even the simplest human motions and tasks are very difficult for a robot to do.
  2. When you think about robotics, think about robots and humans working together, not robots replacing humans.
  3. With takeaway #2 in mind, Ashland needs to look more closely at how mixed reality might increase the effectiveness of our tech-service organization.

Mike Blackburn:

  1. In the area of robotics, there are many things we can do today; however, there are still fundamental challenges that need to be addressed before a robot will autonomously learn and perform what are simple tasks for a human.
  2. As we think about the future, we are often limited by our past. Integration of technology into the way we work is a key to the future. We cannot just do what we have always done with automation; we must think about new ways of working and interacting whether virtually or with robots.
  3. A key to future advancements is cross-discipline collaboration. Looking at problems from multiple points of view is needed to solve the complex problems of today.

Steve Moskowitz:

  1. Even given what I said previously, my biggest takeaway is that, for the foreseeable future, intelligent systems will (and must) operate within the realm of human activities, so a big focus must be placed on understanding the psychology of these interactions, not just the technology to support what is possible.
  2. I see a tremendous amount of opportunity in the HoloLens applications, but not necessarily for R&D or product development right now. Similar to what we saw with the Big Data team, where the initial uses of Big Data were with customers and the marketing/support side of business, I feel the same is true for HoloLens (and similar applications). The initial uses will be around field service, support, troubleshooting, maintenance, etc. and eventually will move back to the R&D realm.
  3. I really liked the idea of using new techniques to look inside our products in non-destructive ways. At Georgia Tech, they showed examples of using CT scans to look at what is happening inside a product, and at Microsoft, we talked about using HoloLens to “shrink” a person (similar to the film Fantastic Voyage) to the size where we can walk inside our products to see things from very different perspectives.

Jennifer Peavey:

  1. There is a need for STEM [science, technology, engineering, and math] research to take a moment and think about why the research is being performed—What is the impact? What does it mean to the economy? to people? to the world?
  2. Makerspace hacking can extend to algorithms, and people’s behavior can be simulated through robots.
  3. There is a huge separation between how robots work by themselves and how they interact with each other and the world. I am concerned that this will slow progress.

Tim Dennison:

  1. Robots are not as smart or capable as I expected. It’s so difficult to replicate basic human capabilities in robotics.
  2. The swarming robots were interesting, especially in the application of collision avoidance. This will be needed in the future of autonomous vehicles and delivery.
  3. The HoloLens could have a great variety of applications in remote services, education, and entertainment.

Ashish Vasil:

  1. The future seems very encouraging for addressing several societal needs including assigning mundane and repetitive or risky/challenging tasks in non-conducive environments to automated tools/robots, freeing up humans to invent/develop on other fronts. There have been concerns about the possible encroachment of robots into jobs traditionally held by humans. There is always a balance to be struck between the two viewpoints. Using the robots to assist in executing tasks opens up other fronts for humans to step up and explore.
  2. Having an open platform for testing swarm robots was an incredible resource for crowd optimization of technology. This would allow for faster and more optimized development.
  3. The exciting part of this experience was holding these discussions in an environment of peers and developers with similar interests across different facets of industry and sharing use cases and concerns. It was reinforcing to find that our queries and concerns were quite similar irrespective of which industry we came from. We were aligned on moving in the same direction in the future and were looking for answers, support, resources, and example use cases that we could leverage for our future application directions.

ATLANTA DEEP DIVE MODULE

Here, we have provided you with some of the recorded sessions and links to additional information reported on at the IRI Annual Conference. Our hope is that it will give you enough of a taste to develop your own insights. We encourage you to engage with some of the following content that is outside of your typical area of expertise or day-to-day role. (Note: This content is only available to IRI members.)

Keynotes about the Future of Intelligent Systems (2.5 hours)

Two visionaries stepped on the IRI Conference main stage, and fortunately for us, their full interviews were recorded!

Tan Le
CEO & Founder, Emotiv
2018 IRI Achievement Award Recipient
The NeuroGeneration

Achievement Award Keynote: The NeuroGeneration and IRI PILOT: Practices in Innovation Leadership, Operations, and Talent

Yann LeCun
Chief Artificial Intelligence (AI) Scientist, Facebook
2018 Medalist
The Power and Limits of Deep Learning

Medalist Keynote – The Power and Limits of Deep Learning

Fireside Chat: The Future of A.I. and Innovation


Breakout Sessions on the Future and Intelligent Systems (2.5 hours)

By design, this year’s IRI Annual Conference was, in itself, a great event for exploring and thinking about foresighting and the future of intelligent systems. The following sessions covered topics related to futures work being done by different organizations:

Perspectives in Strategic Foresight
Back to the Future

These next sessions provided a great overview of this year’s SPRING initiative and began exploring and working with our research to date on the future of intelligent systems:

SPRING Trend Immersion: Moving from Information to Insights
SPRING Immersion: From Insights to Implications

These final sessions and keynotes provided the insights of thought leaders and experts as they described the current and future state of important technologies and trends at the forefront of intelligent systems:

VR Training: Educating the 21st Century Workforce
The Future of the Blockchain
How to Manage Technology in an Exponential World
Jill Watson, Family & Friends
A.I. and Cross-Industry Impact
Should We Be Worried About a Jobless Future?
  
BACK TO LEARNING MODULES