A Constellation Research DisrupTV panel riffed on generative AI use cases, regulation, ethics and how technology can build resilience.

The panel included:

  • Dr. Anthony Scriffignano, global award-winning Chief Data Scientist
  • Sharron McPherson, CEO of The Green Jobs Machine, Adjunct Senior Lecturer at the University of Cape Town Graduate School of Business, and Faculty at Singularity University
  • Natalie Barrett, Senior Fellow at Atlantic Council

Here are some of the big themes that emerged from the DisrupTV discussion:

Regulation. Barrett said her biggest concern is that legislation hasn't caught up with AI and will be perpetually behind if "something happens where you'd want someone to be accountable."

McPherson agreed: "AI and other kinds of technologies can influence global thinking with ideas. Power is shifting from government to companies and from companies to people. AI is shifting power in our society, and I think about voice and agency. We're going to have lots of changes in policy around AI because of the concerns people are starting to have. It's too early to imagine what the impact might be or whether it's not enough. We are not jumping in early enough with the right policies. How am I as a legislator going to legislate something that doesn't have transparency?"

Scriffignano added that regulation can prevent bad things from happening, but it can also prevent good things. "The challenge put on these regulators is that innovation is always going to outpace regulation," he explained.

AI is a tool, but... McPherson noted that AI is a tool that can be used for both good and bad.

Scriffignano said there's a risk that large language models, which are ingesting everything that's being said, will create misinformation and disinformation.

"It's very easy to confuse them (LLMs)," he said. "When we say generative AI what we mean is it is outputting or summarizing what is read or consumed, but that can also be digital hearsay from sources that are no invisible to us. We invented a new hammer so let's be careful about how we use that hammer."

On the positive side, Scriffignano said generative AI can summarize data and text well. "There's no way you could read everything that's being published in any field of medicine right now so AI can give me a summary of what's being said so I can be a physician and not a researcher," he said.

Scriffignano added that AI can also be used to track behavior and ultimately prevent bad things from happening.

Resilience technology. McPherson said she spends a lot of time thinking about using technologies like AI to shift capital into places that need it, notably marginalized communities at the intersection of climate and technology.

"How do you leverage technologies like Earth observation technologies to get data and build resilience technologies," said McPherson. "We have been in stealth mode trying to build an index that measures the resilience and vulnerability of a place or physical asset and provide recommendations that are actionable to save lives and livelihoods."

McPherson said her group has focused on digging deep into how we use data and AI to "begin to solve for climate resilience." "We are going to have millions of people leaving their homes and migrating (due to climate)," she said. "Think about the implications for our global economy and everything you care about. There's no silver bullet to this, but it really is about matching and making sure that we understand what resilience is, and you need data for that."

Defining AI use cases. Barrett said there are multiple AI use cases to ponder, including medical assessments, or a "doctor in a box" that can be sent to disadvantaged areas.

She said:

"We have validated data sets where we do have protocols and processes that we need to use whether it's for financial market, climate security, medical treatment and personalized medicine."

Human intelligence vs. artificial intelligence. Barrett said that human intelligence (HI) will be more valuable than AI due to authenticity.

Barrett said:

"An artist or someone who writes a poem for some reason leaves a part of the soul, but we just get a photocopy of that poem (with AI). No matter what HI will always be more valuable. AI should augment a human to be valuable."

Responsible AI. Barrett said equity in AI in the US should focus on the equality of the data used as well as the quality of results. "AI, if used properly, can be a democratic tool that pulls the power back to the people, but only if used in the right use cases," she said. "It also has to be traceable, which means I need to know where the data came from and what the algorithm did. I need to be able to test it in some way, and it has to be governable."

Scriffignano said that transparency in AI is easier said than done.

"We can't necessarily explain why we reach certain decisions as humans. We can think after the fact or rationalize how we made a decision, but the reality is a lot more complicated than we think. We're constantly simplifying the world around us."

Scriffignano ran through multiple ethical quandaries with algorithms, including privacy when using data for commercial purposes vs. a crisis where "the needs of many outweigh the needs of one." "I'm glad I don't have to be an ethicist because it's really hard to make these decisions and there isn't a black-and-white answer. And that answer changes depending on where you are in the world and what your country values," he said.