Webinar Recap: AI & Data Privacy—Navigating the Opportunities & Risks


In this discussion, CEOs Tim Hayden of Brain+Trust and Brad Weber of InspiringApps break down what’s behind the major transformation in data privacy and artificial intelligence—and the opportunities and risks it presents to businesses today.

You can watch the webinar replay or read through the conversation below.

Data Privacy & AI: The Future Is Here

AI has revolutionized how we live, work, play, and communicate. Tools like ChatGPT have tremendous potential to enhance our lives but can also pose significant risks if not harnessed properly. In the latest webinar, Brad and Tim explore privacy concerns as well as the tremendous benefits of artificial intelligence.

Are you curious about the future of data privacy and how AI will shape it? Join the CEOs of Brain+Trust and InspiringApps as they discuss what’s behind the rapid transformation in data privacy and where it’s headed next.

Join Us for the Next Webinar

Join us for our next live webinar for more InspiringApps insights. Sign up to attend here.

Watch the Full Webinar

 

About Brain+Trust: Brain+Trust is a strategic consultancy empowering brands to compete at the speed of the customer and grow revenue. By helping brands and leaders make sense of an evolving marketplace, they can better understand their customers and grow their business. Brain+Trust Partners counsels companies and organizations with strategic consulting, governance and compliance, automation, product development and more.

About Tim Hayden: With more than 20 years of marketing and business leadership experience, Tim Hayden has been a founder of new ventures and a catalyst for transformational progress within some of the world’s largest brands. Part social anthropologist, part strategic business executive, Tim studies human behavior and how media and mobility are reshaping all of business. From operations to marketing and customer service, he assembles technology and communications initiatives that lead to efficiency and revenue growth. A past and current investor/advisor to technology startups, Tim works with entrepreneurs and ventures to capitalize on opportunities and shifts across many industries. He also proudly serves in executive board and volunteer leadership positions with non-profit organizations.

About InspiringApps: App development that makes an impact. InspiringApps builds digital products that help companies impact their employees, customers, and communities. Yes, we build web, mobile, and custom apps, but what we offer is something above and beyond that. What we offer is inspiration. Our award-winning work has included 200+ apps since the dawn of the iPhone. Our core values: integrity, respect, commitment, inclusivity, and empathy. Our guarantee: finish line, every time, for every project. Get in touch at hello@InspiringApps.com.

About Brad Weber: Brad Weber has more than 25 years of software development experience. Brad received his MBA from the Leeds School of Business at the University of Colorado and spent several years with Accenture before striking off on his own adventures, including the successful founding of four different technology companies. With a passion for software artisanship, Brad founded InspiringApps to build a team that could tackle larger app development challenges than he was able to handle on his own. His leadership creates an environment where the most innovative digital products continue to come to life.

Read the Transcript

Stephanie Mikuls

Thank you for joining us today for our webinar, "AI & Data Privacy—Navigating the Risks & Opportunities," on this rapidly evolving landscape. I'm Stephanie, the marketing director here at InspiringApps. Welcome.

This is a joint session between InspiringApps and Brain+Trust, the second in a series of webinars where we’re talking about data privacy restrictions, artificial intelligence, and app and software development in this new space.

Brad and Tim, I’ll have you guys introduce yourselves, and then we can just get into it.

Tim Hayden

Sure, thanks. Brad—great to see you again. I’m Tim Hayden, the CEO of Brain+Trust. At Brain+Trust, we help companies with their first-party data, usually with customer data platforms or hybrid cloud environments. And we are exploring the world of artificial intelligence in an effort to help companies with both privacy and cybersecurity, but also all the mapping of how to go about implementing customer data platforms and hybrid cloud management.

I’m excited to be here with you, Brad, because, of course, to leverage all that data, many times you need apps, and that’s what you guys do.

Brad Weber

That’s true. Thanks. Good to see you again, Tim. I’m Brad Weber. I’m the founder and president of InspiringApps. We design and develop custom web and mobile apps for funded startups and large enterprises and have experience across this space. Certainly, many of our customers are running into the data challenges and privacy challenges that we’ve been talking about in this series. So I’m excited to dive more deeply with you into that, Tim. And today, our focus is on artificial intelligence, specifically, as you mentioned, and the data privacy implications of that.

I thought I’d take just a moment and take care of a little “artificial intelligence” housekeeping and talk about what it is, just at a very basic level.

As you mentioned, we develop and create custom software solutions. The majority of those solutions—especially in recent decades—have been explicitly programmed. So a lot of what we create is a matter of implementing business rules. It’s: “If this customer’s age is greater than X, then do this.” Or, “If the bank balance is greater than Y, then do this.” There are a lot of explicit rules in the code that cause it to behave the way that it does—and those are very black and white.
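To make that distinction concrete, explicitly programmed logic of the kind Brad describes might look like this rough sketch. The thresholds and plan names here are hypothetical, invented purely for illustration:

```python
# A rough sketch of explicitly programmed business rules, the
# "if this, then do that" style described above. The thresholds
# and plan names are hypothetical, invented for illustration.

def choose_plan(age: int, balance: float) -> str:
    """Pick a plan using hard-coded, black-and-white rules."""
    if age > 65:
        return "senior-plan"
    if balance > 10_000:
        return "premium-plan"
    return "standard-plan"

print(choose_plan(70, 500.0))     # senior-plan
print(choose_plan(30, 20_000.0))  # premium-plan
```

Every outcome is fully determined by rules a programmer wrote down in advance, which is exactly the "black and white" behavior being contrasted with AI.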

What we’re talking about with artificial intelligence is really getting into the gray areas and giving the computers and the processors the opportunity to come up with their own solutions. They don’t necessarily have to be explicitly programmed in order to reach conclusions.

Those conclusions may be things that you've seen in your life. It could be the identification of an object; there are popular apps that will help you identify a plant or a flower or a type of food based on a photo you take. That's artificial intelligence driving the object recognition in that kind of application.

There are also many applications where the system is given a goal or an outcome and has much more flexibility in determining the appropriate or best path to reaching that outcome.

In order to accomplish all of these things, you need a lot of data. You cannot reliably tell the difference between an apple and an orange, or an orange and a banana, using just one photo of each. Models that do that sort of thing rely on hundreds or thousands of examples to make those kinds of determinations.
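A toy sketch of the alternative to explicit rules is classifying by learned examples. This nearest-neighbor snippet labels a fruit by the closest labeled example; the measurements are invented for illustration, and, as noted above, a real model would need hundreds or thousands of examples rather than six:

```python
# A toy contrast to explicit rules: label a fruit by its nearest
# labeled example in a tiny feature space (weight in grams,
# "redness" from 0 to 1). The data points are invented; real
# models rely on hundreds or thousands of examples.
import math

examples = [
    ((150, 0.90), "apple"),  ((142, 0.80), "apple"),
    ((130, 0.30), "orange"), ((128, 0.25), "orange"),
    ((118, 0.10), "banana"), ((120, 0.12), "banana"),
]

def classify(sample):
    """Return the label of the closest training example."""
    return min(examples, key=lambda ex: math.dist(ex[0], sample))[1]

print(classify((148, 0.85)))  # apple
```

No rule here says what an apple is; the behavior comes entirely from the examples, which is why the quality and quantity of data matter so much.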

Certainly, as you get into more complex challenges like self-driving vehicles or fraud detection in transactions, you need lots and lots and lots of data. And the focus of our last episode together was talking about the privacy implications of that data collection. So that’s probably a good place for us to pick this up and look at privacy through the lens of these advancements in artificial intelligence.

Tim Hayden

Definitely. And Brad, when you talk about artificial intelligence—I think we mentioned this the last time we spoke, but—it’s math, right? It is. It is really fascinating, complex math—probably not so complex for mathematicians. But it’s not easy math, right? If it was easy math, everybody would be doing it. But it’s math that’s been around for decades.

And now that it’s been digitized, it’s able to make sense or basically identify patterns within data. And what we saw most recently just over the last several months and including a very exciting week last week—I don’t think it’s coincidental at all that it happened during South by Southwest—but you know, OpenAI released to the world ChatGPT 4, and I saw on LinkedIn everyone sharing that.

I say everyone, but many people were sharing an illustration of a small circle labeled GPT-3 next to a much larger circle labeled GPT-4, basically talking about the exponentially larger number of parameters GPT-4 would have. And, of course, out of the gate, there's hype around the fact that it can pass a bar exam in roughly the 90th percentile, versus GPT-3 only reaching the 10th percentile.

When you look at these things, it's certainly impressive. But where did the data come from? To your point, Brad: where, exactly? And this is where I think there are privacy and security concerns, and many more that we may or may not get to while we talk today.

What OpenAI has access to, and increasingly has more access to, through its relationship with Microsoft and other big tech firms, is essentially data that you, me, and everyone else helped populate the internet with. And many times it was—What kind of car are we shopping for? Or, what are we searching for dinner?—Or, scarily—What are my symptoms? What symptoms do I have? What do they mean that I have?

A lot of this data that was public or accessible by OpenAI is now being used to inform those “parameters” and the language model itself to do more. And Brad, I’ll toss it back to you. But I mean, that, to me, is at least questionable. I’m not going to say scary yet, but it’s certainly questionable.

Brad Weber

Yeah, I agree. There was a great Ars Technica article about the privacy implications of ChatGPT. Their focus, given the timing of the article, was on GPT-3. I don't have updated numbers for this specifically, but the scale was astounding. Several things caught my attention.

One was just the immense amount of data, like you talked about. The number I saw quoted was that GPT-3 was trained on a vocabulary of 300 billion words that it accumulated.

Now, granted, there were many repeated words; it's not saying there are 300 billion unique words. But it assembled those words from blog posts and books and articles and transcripts of webinars and podcasts like the one we're sharing right now. The majority of the data appears to be publicly available.

But, in the example of books, the author of the article noted that they asked ChatGPT to cite a chapter from a copyrighted book. And ChatGPT happily reproduced the entire chapter, with no copyright attribution.

So, that’s troubling, certainly, for authors.

Tim Hayden

It is.

Brad Weber

But you had mentioned that there could be medical data; we don't really know what is being shared. And I think we've been somewhat complicit in this through our decades of accepting privacy policies and, more recently, disclosures about how data might be shared from one platform to partner platforms. This is an example of how that data might be used in surprising ways, where we individually are, as you mentioned, in many ways populating ChatGPT and other similar engines.

And I think that’s worth further discussion.

Tim Hayden

Oh, it absolutely is. I mean, when you talk about the terms and conditions and privacy policies that you check and say you understand, it's just like car dealer advertising on the 10:00 news, when they say at the very end, very fast, that the vehicles they're talking about don't actually exist on the lot, and that if you come in, they reserve the right to tack $2,000 on top through the entire process.

The bottom line is that the legalese, the fine print, in the user agreements for the software we've used over the past decade-plus covers exactly the data that is now being used to inform and train these language models.

And kudos to OpenAI, who has come up with a very sleek, intuitive, and easy-to-use interface that lets us leverage (I hate to call it intelligence, but) some of the math that is able to make sense of our prompts and find, with a certain degree of correctness, what it is we're looking for: the answers we need, or the answers we've asked it to provide.

You know, what’s funny—because you mentioned it earlier about the book and a chapter out of a book—is the U.S. Copyright Office, over the weekend, actually came out with a formal statement saying that they were going to revisit copyright law. And as we do, they left it open-ended. But they said, “We can tell you this much: that if you are using generative AI to produce copy or a book, you cannot claim the copyright to that content.” And I think this is where things are probably for a different webinar and a different conversation for us to have about authenticity.

But, certainly, when we think about the one-two punch of private, sensitive, or otherwise nonpublic data that has been used and then regenerated in some type of content, or in some type of prescriptive advice that one of these language models is providing, it gets a little spooky. Now, I will say it. Now, I will say scary, right?

This gets scary and a little bit spooky when we think of college students, or of someone sitting in their home trying to generate a report or type up something for their PPA and claim that they're the author of it, right? It's happening. Those kinds of things are happening right now.

It’s what we always talk about with “Garbage in. Garbage out.” Maybe it’s not garbage. It’s just somebody else’s. It’s somebody else’s work that’s going somewhere else. And someone else claiming that it’s their own.

Brad Weber

Yeah. I think it’s an interesting point—that we’re talking about two different aspects of copyright material. One is the copyrighted material that might be used to populate the collection of data that’s being used for generative AI. And then the other is when the AI produces potentially copyrightable material, whether it be in written form or artwork out the other end.

And so, I think this is probably one of the larger changes that has come to the Copyright Office in a long time, so it wouldn't surprise me if it takes them a little while to digest.

Tim Hayden

Definitely. And I think, in the vein of privacy: I've had a few clients who've read the OpenAI API documentation and asked me, "Can we simply plug ChatGPT into our CRM? Can we plug it into our marketing automation system?" The premise is to be able to generate content faster and better.

But, what’s happening there, when you make those connections with something that has that processing power—a model that has the ability to identify patterns and to supposedly help you in a utilitarian way—there’s a way, I think, that could backfire, right?

I’ve told each of those clients that asked me that question: my short answer is “No, don’t do it.” Stay using ChatGPT in the browser. If you want to have it help you with outlines or help you write a paragraph—sure. But then wordsmith it as your own. But don’t plug it into your systems. Don’t do that.

Because that’s where I think we have this opportunity to misstep and move in a direction where certain systems, for better or worse, all of a sudden know things that they shouldn’t or have access to things that they shouldn’t when it comes to information.

Brad Weber

So we’ve talked a lot in both of our episodes about the implications for personal privacy and personal data. But I think what you’re touching on now is also an important subject. There is plenty of private data in a corporate environment, so would you like to explore that a little further? Tim—I think that’s what you’re talking about, the CRM, right?

Tim Hayden

Yeah. I mean, you live in Colorado, right? Colorado has its own data privacy law, as does Utah. California has a couple of them with CCPA and CPRA. And on January 1, when yours went into force, it did so along with Utah, Connecticut, and Virginia, as well as a revamp of CPRA.

These laws: I chuckle a little when it's a general counsel or a CFO who's really worried about violating them, and I say, absolutely, you should be ready for some type of litigious call or letter to show up, because someone will say they unsubscribed and you're still sending them email.

That’s been around for a while, and I think we’re going to see that only grow. But the other side of that is: when you read these laws, they’re basically a prescription for restructuring your data to be a better steward of the data that are entrusted to you. When you talk about CRM or customer data platforms, cloud management, all of these systems today, in terms of how they leverage IA/ML, are there to help you move faster internally to the organization.

And that, by all means, is proprietary in some regard. That’s proprietary information, a competitive advantage, if you will. If you know why people are buying from you or what’s working and what’s not with business operations, data can help you inform that. Especially if it’s structured and you have the right systems in place to make sense of it, to derive patterns from it, and to detect all of those things.

So, I mean, there’s the front side of that too. The interface side of it, which is where I look to you, Brad. I mean, where do apps play in all this, right? Where do apps stand in terms of privilege—of access to information? What do you see happening there, and what does that have to do with this world of privacy, especially when you’re plugging into a CRM or some other large database of information?

Brad Weber

Yeah, good question. I think mobile apps specifically, although this certainly applies to web applications as well, often provide the window into the data we're talking about. The immense amount of data driving ChatGPT and its competitors is petabytes of information in the cloud, which certainly eclipses what you're going to be able to carry around in your pocket anytime soon.

So what we see on the client side, both web and mobile, are often the results of analysis happening in the cloud or in a giant server farm: the information, the tips, the guidance for the sales team on which regions to focus on, all that sort of thing.

The numbers are often being crunched in the cloud and presented at the phone or browser level so a user can make sense of them and work from them. The phones in particular (we talked about this a little in our last episode) are also participating in collecting some of that data, sometimes through our active input and other times very possibly behind the scenes, whether it's our location data, our health data, or the number of steps we've taken. That sort of information is helping to feed the machine as well.

So I think we see it in two different ways. And when you're talking about what corporations and IT departments might be considering, and the actions they might take in this regard, one important distinction to make is that they don't really need the scale of a 300-billion-word vocabulary to get answers to their corporate questions.

I think what we see is that the questions our clients want to answer, whether about customer behavior, employees, or products, are much more focused. So for organizations whose sole purpose is not to answer these questions for the general public, but to answer very specific questions about improving the performance of their operations, it's far more practical to create their own models and manage their own data, keeping it private within the walls of the organization to perform the analysis and maintain the competitive advantage you were talking about.

Tim Hayden

Definitely, I think it's all fascinating. The next time we talk, we probably should talk about edge computing, because it's not just the phone, and it's not just the browser in the conventional or traditional sense. It's machines themselves now that are connected, processing millions of signals that come from wheels that go round, pistons that churn, and foot traffic that walks by, right?

I mean, all the different ways the world is connected now. But back to where we are right now. I read an article from TechCrunch this morning that Google Glass is finally being sunset.

And we’ll defer a conversation about Google Glass and all of that for IoT and edge computing because that’s what was happening with Google Glass. There was a lot of processing happening in the frame of the glasses. Right. But the harbinger, I call it, I always thought Google Glass was just not right any time soon because we as humans don’t need a heads-up display.

And at the time I said that (this was a decade-plus ago) I was a mobile strategist myself, helping companies figure out more utilitarian things to do with mobile apps. But at the same time, I was very cognizant that we don't need people staring at their phones while they're driving their cars.

We don’t need people staring at them as they’re walking down the street, and as that happened, you know, it just seemed like it wasn’t any safer to have a heads-up visualization and Google Glass. I don’t think it’s too far of a reach to say that where people are enamored right now with ChatGPT and what Microsoft has done with Bing and pulled back a little bit and then come back a little bit more. What Google’s doing right? And what they say they can do and what they will do more of.

We could probably talk about them specifically. But do we, as humans, absolutely need a Siri or an Alexa, or now something much deeper and much more capable, to constantly be the thing we go to for answers? Especially when we are able to get around, experience life ourselves, and know where to query information when we actually need it. Right?

I understand if you have encyclopedic needs, you're going to have to go to the internet anyway. But I just wonder: what is the shelf life going to be? What is the run? It looks really good right now; it's new, it's shiny. But I don't know, Brad. Where do you think we'll be six months, nine months from now, with ChatGPT or any of these other language models?

Brad Weber

I think we’re definitely going to see privacy at the forefront of that conversation. I think right now, there is, as you said, such excitement around the tool, and people are learning about what’s possible. What’s interesting is that it isn’t new. You talk about being around for decades. I mean, over 75 years, you can go back into some early research papers, like in the 1940s, talking about the possibilities of this.

So the idea isn’t new. And the military—US and others—have been advancing artificial intelligence for a long time. And in public view, I remember early over 20 years ago, meeting teams there were participating in the DARPA’s challenge to create autonomous vehicles that were able to complete a course. It was unknown. And ahead of time, it wasn’t mapped out.

They were given a challenge on the day for their vehicle to figure out how to get from point A to point B over really challenging terrain. Those problems have been worked on for a long time, and what that evolved into is Tesla and others working on self-driving capabilities, and other conveniences popping up in our everyday lives that we talked about in the last session as well.

I think it’s exciting. I think there are a lot of things we’ve talked about that, as you noted, are a little scary. Maybe we’re going a little too far to start. But what’s really exciting is that ChatGPT, in particular, has really thrown the notion of artificial intelligence and ML into the conversations of the general public. And I think that is an important step in the process for us to continue to advance and for that to expand in.

So, I talked about privacy, and you asked what's coming in the next six months. One thing to note about ChatGPT: it's entirely text-based at this point. It doesn't do anything with audio. It doesn't do anything with video. There are a lot of other areas where we're most certainly going to see this start to permeate. But this is an important "big bang" to get everyone's attention at this point. I think it's an exciting part of the process.

Tim Hayden

Sure, I absolutely agree. And to that point about privacy: we mentioned the US Copyright Office, and several states have data privacy laws in place. You also have hungry attorneys out there who will make something out of nothing, and out of things that genuinely have substance, in terms of how data is being used.

A client of ours here at Brain+Trust said to me about two weeks ago, “You know, people say that data is the new oil, and certainly, data has value, but data is actually dynamite. At the end of the day, you have to handle it with care.” And that’s where this is going with privacy. It’s where it’s going with security.

And I think we’re going to see it to be most fascinating. When we talk about three or four months from now or nine months from now, in 2023, there are a number of economic economists and economic analysts that are forecasting that there will be some type of economic recession. That there will be corners of the economy that will shrink, and there will be others that just continue to grow and expand.

AI is certainly one of those that I think we’re just going to watch because it does hold that promise that technology has always given us to do more with less. And it’s proving it day in and day out. But at the end of the day, when you think about patterns, right? We’re coming off the end of a pandemic.

We’re just now getting the emergency label removed. And with that, things weren’t normal. This is not a conversation about normal, but things are just not linear. The way that they were, and they won’t be. As we continue to have innovation move at the breakneck pace that it is. So when it comes to predictive analytics, which is what I think I heard, and I forgot who it was, but I saw some people quoting speakers at South by Southwest that said, you know, all ChatGPT is—is fantastic statistical intelligence, right?

It’s able to detect patterns. It’s able to identify things based on a problem. But it’s that statistical intelligence. Right. And, you know, it’s something that I see if we look in the future—how smart machines are going to be able to really detect our patterns if the world is changing at the pace that it is.

Brad Weber

It’s interesting, and I think it’s worth noting before we part or close out on this subject that it’s not all doom and gloom. I mean, it’s important for organizations to think about treating data. I like your references. Data is dynamite, but be very careful with the data they collect. I think it’s important for consumers, the general public, to be aware of the data that they’re sharing and maybe pay a little more attention to that than we have in the past.

But there’s really a lot of benefit from this as well. I mean, the science, the technology that we’re talking about is helping with medical advances, you know, being able to review medical imagery and make a diagnosis perhaps more effectively than humans would. Or as an initial screening of that is really helpful. I have just a silly example of where some artificial intelligence has gotten my attention in a positive way. On the road recently, I was taking a couple of long car trips in rental cars and took advantage of the cruise control feature.

And although I’m sure plenty of people watching this will be familiar with this, this was new to me: As I approach another vehicle, I’m used to my “cruise control experience of the past” as tapping on the brake very inconveniently. And then once I get around the vehicle that I was approaching, I had to reset my cruise control. And it was a pain.

It was a small pain, but a pain. And I'm now pleased to find that multiple vehicle manufacturers have an automatic detection feature tied to cruise control: I can set a certain speed, and as I approach another vehicle, the car will automatically start slowing down on its own. If I just change lanes, it recognizes that there's no longer a vehicle in front of me and automatically resumes the speed I had set.

That is convenient. It also requires absolutely no personal data from me. Now, there is a ton of data that went into that feature to determine what an appropriate distance is, taking into account the speed of my vehicle and trying to anticipate the speed of the vehicle in front of me. But it doesn’t have to know anything about Brad in order for that to be effective.
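The adaptive behavior Brad describes can be sketched as simple control logic. The gap rule and constants below are illustrative assumptions, not any manufacturer's actual algorithm, and notably the function needs no personal data at all:

```python
# A simplified sketch of the adaptive cruise control decision
# described above: hold the set speed, but slow down to keep a
# safe following distance when a vehicle appears ahead. The gap
# rule and constants are illustrative assumptions, not any
# manufacturer's real algorithm.
from typing import Optional

def target_speed(set_speed: float,
                 lead_speed: Optional[float],
                 gap_m: Optional[float]) -> float:
    """Return the speed (m/s) the controller should aim for."""
    if lead_speed is None or gap_m is None:
        return set_speed                    # lane is clear: resume
    safe_gap = 2.0 * set_speed              # rough two-second rule
    if gap_m < safe_gap:
        return min(set_speed, lead_speed)   # match the slower car
    return set_speed

print(target_speed(30.0, 25.0, 40.0))  # 25.0 (too close, match lead)
print(target_speed(30.0, None, None))  # 30.0 (clear lane, resume)
```

Everything the controller needs is about the vehicles and the road, not about the driver, which is the point Brad makes next.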

And I think one of the messages for brands looking to add that kind of delightful customer experience is, again, to be critical about the data you're collecting and to scrutinize each individual piece: is it absolutely necessary to know "Brad" individually in order to provide a capability that's going to add some time to his life or somehow simplify his day-to-day moving around?

So some thoughts for you to reflect on there, too, Tim.

Tim Hayden

I’m with you. Adaptive cruise control and lane assist, and things like that are in place to provide us with a safer life. You know, Apple’s handoff: With just being able to intelligently know that I was listening to a Spotify playlist or a podcast in the car but when I walk in the house and it connects to the wifi, and all of a sudden, Sonos comes on, and it plays there as well.

I mean, this is, by all means, really great coding. It is intelligence, with respect to the "if this, then that" type of logic that is put into systems now. And what I think is fascinating is that there's a balancing act. If we want to keep on this path of privacy, we have to understand what opting in and allowing that type of data really means. The data you just described isn't really about me, the individual, but about my disposition: my geographic location, my velocity and direction, the time of day, maybe where I was five minutes before or where I happen to be headed next. The fascinating thing is that a lot of this is how the advertising world already works: delivering, programmatically, the best message to the right person at the right time.

So, a lot of this has already been in play for a while, and I wonder how we're going to see systems and language models start to inform it as well. Say you have your data structured right across all of your systems. I'm thinking of a customer data platform now, where 100% of your customer-related technologies are connected via a real-time API.

Most of them (not all, but most) would be connected through that real-time API, so you've got your finger on the pulse of customer behavior all the time. Do you see language models coming in to automate how that data informs a demand-side platform with advertising, or, even better, things that happen within a store, and apps that tell people working on the floor about customers, or certain kinds of customers, headed their way?

And maybe we need to change our script. Maybe we need to move the sweaters to the front of the store—I don’t know. You know how I think, Brad. It’s all connected. And it’s fun to play with in a browser. When are we going to put this thing to real work to help us with that data we have today and to make sure we’re doing so very responsibly?

Brad Weber

Yeah. And I really think that’s the theme of what we’ve been talking about; that’s a good summation. Are there any other key takeaways, if we were to bullet them out, that you think are important to revisit here?

Tim Hayden

You know, as we stated before: go out and experiment with this, right? If you don’t have an OpenAI account, it’s free. There are things you can and can’t do with that free account, and think twice before you start to give them $20 a month or any more than that.

Right. And certainly, I would say, talk to your attorneys and get legal advice. There’s some of it online, but talk to your own business attorney; if it’s not them, they’ll have somebody in the practice, or somebody they know, who’s been looking at these matters. To understand: is it the smart thing to start talking to OpenAI about an API connection?

Is that really a prudent thing to do right now?

And I’d say, lastly, it’s what you and I both know. With the data you have today, if you want to employ automation, and if you really want intelligence that can inform what you do next with your business, you need to look at something like a customer data platform, and at companies like Snowflake that help you organize information in the cloud. That’s where we are today, folks. Those are the two things. Don’t jump off the deep end for the shiny water when what you’re sitting on right now is already pretty much what you need to innovate and do some pretty magical things.
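The consolidation Tim points to, getting the data you already have organized in one place before reaching for new AI tooling, amounts to unifying customer records scattered across systems. A minimal sketch of that idea, with made-up field names and a normalized email address as the join key (a real CDP or warehouse would do far more, including probabilistic identity resolution):

```python
from collections import defaultdict

def unify_customers(records):
    """Merge customer rows from separate systems (e.g., e-commerce,
    support, loyalty) into one profile per email address: the kind of
    consolidation a customer data platform or cloud warehouse performs.
    """
    profiles = defaultdict(dict)
    for rec in records:
        key = rec["email"].strip().lower()  # normalize the join key
        # Fold every non-key field into the growing profile.
        profiles[key].update({k: v for k, v in rec.items() if k != "email"})
    return dict(profiles)

rows = [
    {"email": "Ana@example.com", "last_purchase": "2024-05-01"},
    {"email": "ana@example.com", "support_tickets": 2},
]
# Both rows collapse into a single profile keyed by "ana@example.com".
print(unify_customers(rows))
```

Once profiles are unified like this, the automation discussed earlier in the conversation has a single, current view of each customer to act on.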

Brad Weber

Remember that there are tools available for organizations to realize the benefits of AI we’ve been talking about within their own walls, without necessarily jumping on (like you said) the latest “shiny solution.” There are more focused tools and solutions that will help companies get the answers they need to improve internal operations. I think that’s great; thank you, Tim.

And we alluded to it in this session, as you mentioned as well, that there’s a lot going on with the Internet of Things, connected devices, and edge computing. I know you and your customers have a lot of experience with connected vehicles, so that seems like a great topic for us to turn our attention to next time.

Tim Hayden

Definitely, we alluded to it, but when you get into adaptive cruise control and lane assist, there’s a lot in play there. There’s actually some work we’re doing in that space that I’m looking forward to uncovering with you.

Brad Weber

Excellent. Terrific. Well, we will look forward to doing this again next month. It’s good to see you, Tim. Take care.

Tim Hayden

Thanks, Brad.

Tune in for expert insights.

Join our live webinars featuring the CEOs of Brain+Trust and InspiringApps. Stay ahead of the app development and data curve, and be a part of the conversation with our experts.