Episode 31 Beyond the Generative AI Hype

[00:00:03.610] - Hessie Jones

So I have a question. Are you one of the billion users using ChatGPT today?

[00:00:12.290] - Hessie Jones

You are? I am, too! Probably not as much as you, but for people in our audience, what other generative AI tools are you using? Are you using DALL-E? Are you using Midjourney? Are you using Stability AI and their DreamStudio? I've tried that one, by the way. So I'm going to give you some stats. Since November 2022, when ChatGPT was released, adoption has skyrocketed to almost 1.16 billion users. Consider this: between February and March this year, it grew 55%. This year alone, ChatGPT is expected to generate almost $200 million in revenue, and by 2024, it's going to be $1 billion. What's surprising is that 88% of the traffic to ChatGPT is direct. So it's considered a type of search engine, and it is now ahead of Bing and DuckDuckGo, generating almost 1.76 billion visits worldwide in the month of April. And that was up 12% from March. So suffice it to say, it's becoming a strong contender to Google. So welcome to Tech Uncensored.

[00:01:35.700] - Hessie Jones

My name is Hessie Jones, and with the advent of the publicly released ChatGPT and the other generative AI solutions, what users found was that productivity, from their perspective, went through the roof. ChatGPT could understand. It could answer questions beyond a normal search query. It could be a great source to brainstorm ideas, it could explain complex things, it could draft letters, it could summarize long documents, and it could even write and debug code. So what we realize today is that there is this growing dependence on generative AI tools, but the speed at which it's being used, and the likelihood that its output will be trusted and used, means that we should take a step back before we believe that this technology is actually good enough for everyday use. So I'm so glad to have Dr. Augustine Fou join me today. Welcome, Augustine. He is a digital marketing expert with over 30 years of experience who has been tracking and reporting on ad fraud. He's also the founder of FouAnalytics. Augustine and I have converged many times on topics like fake news, AI and media buys, data privacy, and now generative AI.

[00:03:05.350] - Hessie Jones

So we're going to be diving deeper into this topic: why it's taken the world by storm, why you should also be cautious and interrogate the efficacy of these kinds of tools, and, more importantly, how you should actually be leveraging the technology on a day-to-day basis. So let's get started. Thanks, Augustine, for joining us. You've been in the digital space for some time, and you've seen the advent of digital advertising before Google and Meta became the forces that they are today. So in your view, what has stood out in the last two decades when it comes to how digital has been truly integrated into and has transformed all our lives?

[00:03:57.530] - Dr. Augustine Fou

I remember back in the day, in the mid-90s, we were still trying to convince clients they needed a website. So a lot has happened in the last 30 years or so. These days, kids are always staring at their phones; they can't even be parted from their phones. We now have constant and instant access to information. We've seen so many industries being transformed. We see people's lives being transformed, and they're so dependent on the information they get online. But in doing so, we've kind of moved away from news sources. People used to get their news on TV, but now they're instantly getting alerts when breaking news happens. The problem is it's much harder to tell what's actually truthful versus not, because it all looks real, right? And so it also plays into the hands of the bad guys, who can actually be spreading false information. And now with these generative AI tools that can generate images and sounds, voices, and even video, it's made it easier. It's kind of democratized the tools to the point where even consumers can generate or impersonate someone else, right? So I think some of those dangers we've never really had to deal with, we're going to have to deal with.

[00:05:18.360] - Dr. Augustine Fou

So I've seen a lot of different things, both good and bad. I think today we're kind of here to talk about the balance of both, right? Because I think all these new technologies, just like any tool, like a hammer can be used to build a house or it can be used to murder someone. So I think we have to take that approach and be more pragmatic and understand the limitations or the risks and the boundaries of these tools so that we can actually use it as part of our daily lives as well.

[00:05:46.480] - Hessie Jones

Thank you. So let's start with actually defining some of these things, from the perspective of the technology itself. Can you give us, I guess, a layman's definition of GPT, the generative pre-trained transformer?

[00:06:04.700] - Dr. Augustine Fou

Yes. So I think we're going to talk more specifically about generative AI, right? And there are different kinds. ChatGPT most people are familiar with in terms of generating text. There are also things like DALL-E, Stable Diffusion, and Midjourney for creating images, and I think there's a new crop of video generators as well. But generative AI means the algorithms are going to generate some new things, and it's essentially remixing some old things. For ChatGPT, they fed it the entire Internet's worth of content up through 2022. So it's a bunch of text, and it's essentially creating new remixes of that text based on the prompt you give it. But because it's a remix, you have to think about the limitations of whatever text was fed into it as training data. And similarly for the images, which we may not get into too much today: it's a remix of other images that it has been fed. So in just using something like Midjourney, sometimes you see artifacts like partial watermarks in the images, because it might have used images that were watermarked or copyrighted by somebody else.

[00:07:23.170] - Dr. Augustine Fou

But again, we don't really know where those source images came from, who owns it, and even what source images were used in the remix. So that kind of gets into some of the potential risk of it because you don't know the origin of the data or the input that was used.
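Augustine's description of generative text as a "remix" of training data can be illustrated with a deliberately tiny stand-in: a bigram Markov chain. This is not how a transformer actually works internally; it is only a toy sketch of the narrower point that such a generator can only emit word sequences recombined from pairs it has already seen in its training text. The training sentence below is invented for the example.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, seed=0):
    """'Remix' the training text: every step reuses a word pair seen in training."""
    rng = random.Random(seed)  # seeded so the remix is reproducible
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:  # dead end: the last word never had a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

training = ("the model remixes the text it was trained on "
            "so the output looks fluent but the model does not understand the text")
chain = build_chain(training)
print(generate(chain, "the", 8))  # fluent-looking, but purely recombined
```

Every adjacent word pair in the output already occurs in `training`; nothing genuinely new is produced, which loosely mirrors the point that the output is bounded by whatever was fed in.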

[00:07:40.530] - Hessie Jones

Let's talk about how it's evolved, because not long ago there was this AI and machine learning hype, and everybody thought that this was going to go a lot faster than it has. And then last year, suddenly, it just took off. From the perspective of just defining how it differs from what we're used to, what is the actual change?

[00:08:07.530] - Dr. Augustine Fou

It's really not a technical change, right? From my perspective, I've seen algorithms of different generations for many, many years. But when you talk about machine learning and artificial intelligence, it had previously been relegated to academia, right? Only certain people, computer scientists or whoever was deep into this, could use it, because it was simply too hard to use. And really the big change that happened at the end of last year was that it became easy enough for consumers to play around with it. So when ChatGPT kind of opened its doors and allowed people to literally chat with it, just by typing a prompt, that was the watershed event that made it spill into the public sphere in a way that it hadn't for the past ten years, or maybe going back even further, the past 20 years. Various forms of this, algorithms that did similar work to what we now call AI, have been around; we just didn't call it AI or ML back then. I think the main thing is that now consumers are able to use it. And that's why there are so many examples now, very funny examples, some good, some bad, but consumers are playing with it, and because of all the hype around it, it just kind of seems like it's a new thing.

[00:09:33.740] - Dr. Augustine Fou

But algorithms have obviously been around for a long time. Machine learning has been around.

[00:09:38.650] - Hessie Jones

Okay, so I think you're right from that perspective, because for the most part, even before ChatGPT, a lot of these abilities had been enabled from a developer perspective. So unleashing it, and the fact that consumers can now consume it in a way that has never been done before, actually puts, I guess, the developer community at odds a little bit, right? Because people can start creating things on their own and creating code from normal speech. So let's talk about some of the good stuff, like why people are actually flocking here. Let's talk about some examples that you found in, I guess, your journey of trying some of this technology, and what's been really good about it.

[00:10:37.930] - Dr. Augustine Fou

It seems like it's magical, because you can just type a prompt, ask it to write a 500-word article on some topic, and it'll actually write it in 30 seconds, right? What would have taken a human hours and hours of labor to do is now done in 30 seconds. And in fact, most of the output is readable English sentences. So when you're first seeing this and you've never seen it before, it literally seems magical. And so I think students have played around with it to do their homework for them, and that kind of stuff. So we're seeing new use cases. But you have to read the articles a little bit more closely. What I've done is run some experiments. I write about ad fraud and digital marketing a lot, and when I ask it to write about digital marketing and topics that I'm very familiar with, you start to see it's missing the nuances, right? It is full English sentences, and it has the right keywords in it. But when you actually read it and try to think about the meaning of what it said, you start to realize you're at the boundaries of what it can actually do.

[00:11:48.360] - Dr. Augustine Fou

And you start to realize that the AI doesn't actually understand anything that it's writing. It's just remixing the sentences in a way that is readable. Now, it's really good at "make me a list". Make me a list of 30 birthday ideas, or make me a list of 50 questions people have about digital ad fraud. That's great, right? It just saves so much time. I use it in my daily workflow just to do that kind of stuff. I use the term divergent thinking; some people might call it brainstorming. You can sit in a room with a whole bunch of other people and brainstorm a bunch of ideas, but if you ask the AI, it can do it in 3 seconds. So it's really magical that way. The flip side is what I'm going to call convergent thinking: arriving at a specific or precise answer. The best way to illustrate that is math, right? Two plus two actually equals four. There's no other answer that would satisfy that. So when you have to arrive, or converge, at a specific answer, it gets much harder for it to do.

[00:12:57.160] - Dr. Augustine Fou

So in the experiments that I've run with ChatGPT, when you ask it, for example, to talk about my background or list out my background, it'll list things like I went to some university that I never went to, or that may not even be a real university. And it'll say that I won the Ad Fraud Researcher of the Year award when that doesn't even exist. Okay? So in certain cases, when you need it to be precise, it's not very precise. But when you need it to write something like marketing copy, it's awesome. I've written some of these examples: write me 300 words about this single-origin coffee from the rainforest of Brazil, or something, and it does a great job. The copy reads beautifully. So those are just to illustrate what you can do with it, but also to realize the boundaries or the limitations of what it can actually do.

[00:13:49.240] - Hessie Jones

What it can actually do, right. In a lot of respects, because I also come from marketing, and writers are a dime a dozen. Oh, I wouldn't say a dime a dozen; I shouldn't say that, because I'm a writer. But it is a skill, it's a craft. It's just like being an artist. And to be able to put a prompt into ChatGPT to say, give me, let's say, the most compelling headlines, and give me four versions of it, and for it to do that... you could save hours by doing that. You don't even really have to A/B test, in a way, you know what I mean? Because the best ones could already be there, and you don't have to worry about bringing in a team of writers to actually come up with some of these, you know?

[00:14:40.810] - Dr. Augustine Fou

Yeah, I mean, you've heard the news; I think there's a writers' strike in Hollywood and all that, right? Just like any transition period when new technologies become available, some people are scared of it. And I would recommend, or encourage, them to embrace it as a tool. It's not going to replace human intellect or human intuition and nuance, but use it as a tool to generate a lot of ideas, and then the humans can refine and build on top of that. Like I said earlier, it would take ChatGPT 30 seconds to come up with 30 questions that people ask about digital ad fraud, and then I go through and start answering those. So it really helps my workflow. So I think writers can also think about the change in the business model, right? Artists and writers in the past have been paid for their time in writing something, or in drawing the piece of art that the buyer buys. But now, if the tools can generate the entire piece of art in 3 seconds, the role of the human is actually in coming up with the idea to begin with, right?

[00:15:54.040] - Dr. Augustine Fou

To create the prompt, to ask it for something really cool or creative, but not have to spend so much time on making it. So I think the revenue models, and how artists and writers think about this, should also change, because it's not so much that they would charge for the hours and hours they spend making it; they really should be charging for the creativity that a very seasoned or experienced artist or writer brings: something that's very useful and meaningful, but where they don't have to spend a lot of time literally writing the words on a page or using the paintbrush themselves.

[00:16:30.770] - Hessie Jones

Right. I'm going to add another analogy to that. Remember when digital first emerged, and the one place where everybody made money was in developing websites? If you knew HTML, if you knew CSS, then you could make loads of money. And then over time, suddenly it became a thing where anybody could do it.

[00:16:55.810] - Dr. Augustine Fou

There are even algorithms, right, that make the whole website for you. In fact, they do it now. And you've heard the term prompt engineering, right? That might be a fancy term that didn't exist last year, but in my research, just playing around with it, I can now actually make the AI voices emphasize certain words, add pauses here and there. It's subtle, but I've learned that through experimentation. So I can now tune the voices and the voiceovers better than if I had no experience doing it. And I can also tune the Midjourney prompts to get more of what I want in the image that I'm trying to create. So I think there are going to be certain people who have spent the time experimenting, who have the experience, and who can leverage that. And there are others that don't have that experience; they will need your help to create images more efficiently in Midjourney, or text more efficiently in ChatGPT. So there are new opportunities for everyone.

[00:17:58.250] - Hessie Jones

Okay, so let's talk about my favorite: personally identifiable information. I mean, OpenAI has said that they don't include personal information in ChatGPT's data sources. They've said it comes from sources like Wikipedia, things that are publicly available. How can we trust the information, or should we trust the information, that's spit out of it if we don't really know what's included? I think a lot of the information they provided was in broad brushstrokes about which databases, et cetera. But there have been instances where not only personal information but IP data has actually come out of it as well.

[00:18:49.670] - Dr. Augustine Fou

Of course. I mean, they obviously will make those claims, but again, even they don't know. The point is, it's a black box. Nobody knows, right? So yes, they didn't explicitly input data from databases, like from an insurance company, or a hospital, or medical information, that kind of stuff. And furthermore, they would not have deliberately ingested a database with your name, phone number, email address, home address, and that kind of stuff. They could have, right, because all that data is also public; you can just buy it from Experian or Acxiom or somebody like that. But a lot of times people will have their phone number, contact information, home address, and everything on their websites, especially small business owners on their Shopify sites. They put their own phone number on there, their own email address. So if all of that is just content that's scraped, they actually don't know, and they don't even have a way to prove, whether it's in there or not. That's the problem. So when you talk about PII, or personally identifiable information, a lot of times in my research I've seen ad tech companies say, oh, we don't have their name, therefore we don't have PII.

[00:19:58.370] - Dr. Augustine Fou

But it's almost like they're one step away from easily re-identifying the person. I think it comes down to a matter of definition, and a lot of the ad tech companies are going to just use the lawyer's definition: oh, we technically don't have their name, therefore we don't have PII. But they don't even know themselves. So I think there is that danger in there, and we have not tested any of this stuff in court. If an advertiser uses it, what is their actual liability and risk in using that? Again, all of that has yet to be tested and will need to be tested.
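The "one step away from re-identifying" risk can be made concrete with a toy example. The records and attributes below are entirely invented; the point is only that dropping the name field does not anonymize a record if the remaining quasi-identifiers still match exactly one person. (Famously, ZIP code, birth date, and gender alone have been shown to uniquely identify a large share of the US population.)

```python
# Invented sample data: the kind of table a data broker or ad tech firm might hold.
records = [
    {"name": "Alice", "zip": "10001", "birth_year": 1980, "gender": "F"},
    {"name": "Bob",   "zip": "10001", "birth_year": 1975, "gender": "M"},
    {"name": "Carol", "zip": "94105", "birth_year": 1980, "gender": "F"},
]

def reidentify(quasi_identifiers, table):
    """Return the names of all records matching the name-free attributes."""
    return [r["name"] for r in table
            if all(r[k] == v for k, v in quasi_identifiers.items())]

# A "we don't have their name, therefore no PII" record...
leak = {"zip": "10001", "birth_year": 1980, "gender": "F"}
print(reidentify(leak, records))  # → ['Alice']: one match, so fully re-identified
```

With three records the match is trivially unique; real datasets need a few more fields, but the mechanism is the same, which is why "we don't store the name" is a lawyer's definition of anonymity rather than a technical one.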

[00:20:36.020] - Hessie Jones

So, we understand... well, I think the other part of it is that we're talking about ChatGPT, we're talking potentially about Bard, and Meta has their LLaMA, but there's a lot more open source that's out there as well. And so we don't know how they're gathering their data. I spoke to the founder of LAION, and they use bots to clip information from the web, anything that's on the open web, obviously, and they're dealing with their own, I guess, potential legal fallout because of the information that's scraped. But if information comes from everywhere, then you would think, and this is a naive question, you would think there would be a little bit more representation in the data. And yet we do have a bias problem. So how do you respond to that?

[00:21:37.070] - Dr. Augustine Fou

It depends on the data that's fed into the model, right? I'll use a specific example. Bloomberg built their own BloombergGPT using financial information, and that's a great example of a vertical implementation of it. You go to Bloomberg for financial information; now you can ask it financial questions. But again, we don't know what was put in. If we're ingesting stuff from the open Internet, there's going to be stuff that is biased; that's part of it. But again, these large language models are just remixing things, so the bias can still seep through. And furthermore, if you think about a lot of the company owners in the past, it's been mostly men, so there may not have been a lot of written text content from other voices that could be ingested by these models. So whatever biases were there are going to be just repeated by the model again, because it doesn't understand what it's writing. It's simply remixing and regurgitating words.

[00:22:45.510] - Hessie Jones

Okay, so let's get into that, because I think you're right about the verticalization of this technology. I think we're already seeing it, whether you use the ChatGPT API or build your own model. We're already seeing a bunch of different plugins that are now available, like Zillow, where you can actually do a home search and put in your preferences: your location, number of bedrooms, et cetera. And it will actually go into their specific database and find the right home for you. We've seen it with PortfolioPilot, which does the same thing to streamline some of the investment processes. So when you talk about Bloomberg, because they already have the advantage of having these financial databases, and of being able to, I guess, predict markets in the way that they have in the past, by superpowering them with this technology they will be leaps and bounds ahead of everyone else, correct?

[00:23:53.130] - Dr. Augustine Fou

Yeah. I mean, think of this as a modern version of those chatbots that were on websites. Sometimes if you need customer service, you can click that little chat button on the lower right of the screen. Those have been around for a number of years; they just didn't work very well. But now, with large language models, the AI on the other side can actually respond with complete English sentences, so it seems very magical now. But you can think of those chatbots as earlier versions of these algorithms; they've been around for a long time. Now, when we talk about verticalization, Bloomberg has the domain expertise in the financial information sector, so they are the best company to offer a BloombergGPT where you can ask it for financial information. And then, like you said, Zillow does it for real estate. So I think the role of humans is going to evolve beyond the task of creating the content to more of the task of curating content and making sure it's accurate, right? First of all, feed the right information into the models, and then, even after it produces the 500-word article for you, the human can spend the time editing it, curating it, and making sure it's accurate.

[00:25:08.550] - Dr. Augustine Fou

That's a different task. And the human is uniquely able to do that, and the algorithm can't, because it doesn't know what it wrote. It just regurgitated some English sentences. So, again, to reemphasize: the nature of work is going to change, and the humans who embrace it and take advantage of it will get ahead. The people who fear it and fight it are going to fall behind.

[00:25:31.610] - Hessie Jones

Okay, I want to direct you to an article that I just read in the last week. Sam Altman, who is the CEO of OpenAI, came out and said that ChatGPT has been played out. Basically, any future advances will need to come from elsewhere, because the model itself is going to have diminishing returns as we try to scale. A lot of that is based on physical limitations and the data centers; obviously, the cost is going to be, he said, way over $100 million. What is your perspective on this, as a lot of companies are already leveraging the technology to build out models and solutions for themselves?

[00:26:26.410] - Dr. Augustine Fou

I agree with Sam, and it's basically because ChatGPT was already trained on the entire Internet's worth of data, right? And they said up through 2022 or something. So if you add 2023 to it, it's only a little bit more information, but you have to retrain it. That's going to cost a lot of processing power and GPU time and all that kind of stuff, and the output may not even be noticeably different or better. So that's what he means by that. But on the flip side, you see Stanford researchers spend $600 to train Alpaca, their own model, right? And it's almost as good as ChatGPT. Most people won't even notice it's not as good, or they won't notice the nuances. So on the flip side, it can be extremely low cost. I've also seen examples of where it is possible for new teams to create their own GPT based on their training data. And you don't have to throw the entire kitchen sink of information at it; you don't have to put in the entire Internet's worth of content. You can put in specific vertical content. Like I said before, Bloomberg did that with financial information.

[00:27:42.630] - Dr. Augustine Fou

Imagine if an oncologist, or the Mayo Clinic, or a specific hospital system that specializes in, say, oncology or cancer research put in that specific content and made their own. It is possible. And that could actually be better than some generalized AI, or ChatGPT, that's trying to be everything to everyone. I think he also said that we've already moved into the era of more verticalized GPTs. I agree with that. I think that's going to, again, democratize it and create many, many more opportunities for startups to get in on this, as opposed to OpenAI itself throwing 2023 data at something that's already been trained up through 2022. You're not going to see that much more incremental benefit from that huge language model.

[00:28:34.870] - Hessie Jones

Okay. I want to actually get into some of the things that you've been doing with Midjourney, because we haven't talked a lot about the image or art production side of generative AI. Can you give us an example of how you're actually using it to be more productive?

[00:28:56.250] - Dr. Augustine Fou

Yeah, so in the last five days, I've been using Midjourney to generate images of spokespeople, and then I generate a voiceover based on a script that I write, so the voices actually sound very real. And then I pair the image with the voiceover to make a talking head, right? Basically an avatar that reads out the script. Those look very convincing, and they took less than five minutes to do. Another example: this morning I was just playing around with some prompts, looking at other people's for inspiration, and it was like a photo shoot with a supermodel in Italy. It generated those images in 3 seconds. Imagine how much cost and time it would take for you to actually fly to Italy, set up a photo shoot, and then actually shoot it. So those are examples where Midjourney for image generation, the way ChatGPT is for text, is going to again change the nature of work for many different sectors. Now you can just generate those images and actually start using them right away. So I've incorporated AI into various aspects of my daily work, into my workflow, both for ideation as well as actually producing something.

[00:30:13.330] - Dr. Augustine Fou

I mean, I had not produced any YouTube content until five days ago. Basically, I was just messing around with these available tools, and they were easy enough for even me to use. And I think a lot more people can play around with them and try their hand at it.

[00:30:29.690] - Hessie Jones

Okay, so I have one last question, and this is more personal because you've seen this space evolve a lot in the last couple of decades and so now we're seeing a true democratization of technologies like this. But as everyone and their brother has said, we have to be cautious. You've said the same thing. What is your thinking around how do we keep working with this technology, keeping it out there, but how do we do it in lockstep with being safe?

[00:31:08.550] - Dr. Augustine Fou

So I'll use the concept of disinformation here. I did an experiment, and I've published this on my LinkedIn: I asked ChatGPT to write a 300-word article on why COVID is fake, or why masks don't work, and so on and so forth. You can just read the output in my LinkedIn article. But literally, because these tools are now available to anyone, literally anybody can write fake information, disinformation, and publish it somewhere, right? On a WordPress site, or to their Facebook page, or whatever. So those are examples of where the tool can be used for bad purposes as well. So I think, like I said earlier, the role of humans is changing, where we really have to be very vigilant about the stuff that we read out there. You know, we used to go to reputable news sites, right? We go to a New York Times, we go to an ABC News, or things like that, because we expect those news outlets to actually have editors and journalists who adhere to journalistic practices. They have to vet their sources. But when things are being posted so rapidly, we kind of lose sight of that.

[00:32:23.140] - Dr. Augustine Fou

And also, if an article is now posted to social media, most people don't actually check where it came from. That's how easy it has become to spread disinformation. So I think the warning is: everyone's going to use the tool, right? The cat's out of the bag; we can't put it back. The tools are going to be out there, and everyone's going to be using them. So now you have to be even more vigilant, and always on the lookout for stuff that doesn't look real. One of the ways I do that is, when I see a headline on Twitter or something, I actually Google something similar. I paraphrase it and see if there are any other news outlets that can corroborate it. Something as simple as that, just a common-sense, basic practice, will allow you to quickly figure out, okay, well, this doesn't seem real, because nobody else is talking about it. People have to build more muscle memory to do some of those things, because the amount of false information out there is going to just dramatically explode. These tools are making it so easy, including images, right? So they can write the fake news story.

[00:33:25.220] - Dr. Augustine Fou

They can even generate the image to make it look real. So it's almost like photographic evidence is no longer admissible in court, because you can just generate it. Even videographic evidence is no longer admissible, in my mind, because you can just generate it. So I think the world is really changing around this. But again, we're in a transition period, so it's kind of scary. I think the tools could also be very powerful, though, and could really help people be much more productive and do way more than they could before.
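The corroboration habit Augustine describes, paraphrasing a headline and checking whether any other outlet reports something similar, can be sketched as a toy heuristic. Real verification means an actual search across independent outlets; here a simple word-overlap score stands in for "someone else is talking about it", and all the headlines are invented.

```python
def similarity(a, b):
    """Jaccard overlap between the word sets of two headlines (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def corroborated(headline, other_outlets, threshold=0.3):
    """Treat a claim as corroborated if any other outlet runs a similar headline."""
    return any(similarity(headline, h) >= threshold for h in other_outlets)

claim = "new study says coffee cures all disease"
outlets = [
    "markets rally after strong earnings reports",
    "local team wins championship game",
]
print(corroborated(claim, outlets))  # → False: nobody else is reporting it
```

The threshold and the overlap metric are arbitrary choices for illustration; the takeaway is the habit itself: a claim that no independent outlet echoes deserves suspicion.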

[00:33:57.520] - Hessie Jones

Yeah, absolutely. I know what's really difficult right now is that government cannot step up as fast as the technology is being adopted.

[00:34:08.980] - Dr. Augustine Fou

It's going to be very hard to regulate. Very hard to regulate. Like what do you regulate? I mean, some of these tools are legitimate tools and they have legitimate use cases. So how do you actually regulate it? And even if you do regulate it, just like any other laws, bad guys don't follow the rules anyway, so you're not going to stop them by having regulations.

[00:34:30.070] - Hessie Jones

Exactly. I was speaking to one of the godfathers, I guess, of AI. His name is Jürgen Schmidhuber, I believe. And what he said is that human nature will determine the future of technology, because even with the invention of fire, even with the Industrial Revolution, it's the human values that determine the fate of these technologies. So as much as we're playing whack-a-mole right now with some of the bad stuff that's happening, there is an understanding that some of this stuff has to be fixed, and somebody's cataloging it, and somebody's creating some kind of ethics framework to be able to combat it. But it has to be done outside of government, because those wheels move much faster outside of regulation. And I think you're right. We as humans and citizens have to edit ourselves as well, and we have to take a little bit more responsibility and educate ourselves as these things become a lot more commonplace. Any last words, Augustine?

[00:35:44.710] - Dr. Augustine Fou

I see a question from Joe. He says, "We're going to head towards a world where we can only trust what's in front of our face or what's live-streamed." Having studied fraud and bad guys for ten years, I would say don't trust anything. You've heard the term "trust but verify"; it's like, don't trust, and always verify. You have to protect yourself, because you can't rely on regulations or governments or anybody else to do that. And you also have to protect your kids, because they're exposed to this kind of stuff all day long, because they're glued to their phones, and these are things they don't have the context to understand are fake.

[00:36:25.510] - Hessie Jones

The one thing I will say is that you can fight fire with fire: use the same technology that is creating spam and all these bad tools to counteract them. I think that's one of the fastest ways to identify some of this bad stuff and hopefully at least have some semblance of control in the future. So that's all we have left for today. Thank you, Augustine, for joining us. And no problem, Joe; it's nice seeing you on the thread. So if you'd like to follow Augustine, he is on Twitter, and it's @acfou, right?

[00:37:09.270] - Dr. Augustine Fou

A-C-F-O-U, my initials.

[00:37:11.930] - Hessie Jones

And you're calling yourself the Prompt Whisperer these days. That's cool.

[00:37:16.170] - Dr. Augustine Fou

Yeah, I've just been playing around with it for so long. And find me on LinkedIn as well; I have some of those AI articles with examples, screenshots, and that kind of thing.

[00:37:25.790] - Hessie Jones

And the one thing we're going to do with this podcast, and Augustine has told me about it, is that I'm going to take the audio that comes out of this podcast, feed it into ChatGPT, and we're going to create a blog out of it. So let's see how good that comes out. Anyway, everyone, thank you for joining us. Tech Uncensored is available wherever you get your podcasts. Again, my name is Hessie Jones. I look forward to seeing you next time. Have fun and stay safe.

[00:37:59.130] - Dr. Augustine Fou

Thank you.

Creators and Guests

Hessie Jones
Host
Hessie Jones
Advocating for #DataPrivacy, Human-Centred #AI, Fair & Ethical Distribution 4 all; @forbes she/her; Developing Data Privacy Solutions https://t.co/PudK3nLMU9
Dr. Augustine Fou
Guest
Dr. Augustine Fou
FouAnalytics - Independent Ad Fraud Researcher