07.03.2020   /   duration: 43 min
The Experience Lab
A Big Plunge into Machine Learning


Rob and Jay are joined by Principal Developer and resident Machine Learning guru, Jerry Deng, for a detailed primer on the ins-and-outs of Machine Learning. Our wide-ranging episode includes a review of the relationship between ML and Artificial Intelligence, practical applications of ML, ways to get started, potential obstacles and a glimpse into where we think Machine Learning is headed.

Hosted By

Jay Cosgrove, Senior Product Manager at Digital Scientists

Special Guest

Jerry Deng, Principal Engineer at Digital Scientists

Episode Transcript

Rob Hall: This is The Experience Lab, the official podcast of Digital Scientists from Atlanta, Georgia. We’re an experience lab that explores and builds digital products. My name is Rob Hall, and I’m the Senior Director of Product at DS.


Jay Cosgrove: And I’m Jay Cosgrove, Senior Product Manager. 


Rob Hall: Thanks for listening.


We’re here today with our esteemed colleague, Mr. Jerry Deng.


Jerry Deng: Hello, guys.


Jay Cosgrove: Jerry, so excited to have you man. 


Rob Hall: Jay, can you explain what Jerry really does? 


Jay Cosgrove: Jerry is a principal developer and one of our lead AI and machine learning engineers.


Rob Hall: He’s basically a genius is what you’re saying? 


Jay Cosgrove: Pretty sure he built Skynet.


Jerry Deng: I don’t want John Connor to come and kill me, but anyway.


People, usually clients, come to me with problems, and I try to figure out ways to solve them.


Rob Hall: What we’re here to talk about today is machine learning. First of all, what is it? What is machine learning? And is machine learning the same thing as artificial intelligence? Is it different from AI? And if so, how?


Jerry Deng: Just a little differentiation between AI and machine learning can give you an idea of what machine learning is. Probably during your childhood, you played video games and came across the term AI as the way those NPCs or enemies actually find a route and come attack you, or some basic baked-in logic like that.


Rob Hall: You kind of have this classic sci-fi image in your brain about what artificial intelligence does. And usually it’s the bad guy, right? 


Jerry Deng: Yes, yes, it’s usually the bad guy. But if you bring it to practicality and today’s applications, actually a lot of things can be considered AI. For example, your Google Maps, when you say, “take me from here to the restaurant.” That routing mechanism or algorithm is considered AI. It’s just baked so much into our life that we don’t call it AI anymore. It’s just so given. Even simple computer logic, if-then-else kinds of rules, could be considered AI. And machine learning is a subset of that. It’s trying to mimic intelligence like AI does, except it’s using repetition and figuring out a pattern out of that repetition. That’s sort of where the term “learning” came from.


Rob Hall: Just to get super literal and simplistic with it. If I’m a machine, and I am being trained to recognize certain patterns in data.


Jerry Deng: Yes. 


Rob Hall: I have a set of data that I’m trying to analyze, and I create an algorithm that loops over that data and tries to look for certain types of patterns or a certain type of information within that set of data. You could consider that machine learning.


Jerry Deng: Yes. And I can give you a simpler example. For example, in stock trading, there is the moving average. The moving average itself is the simplest form of machine learning. You basically have a bunch of price data points, and then you draw a slope line, diagonal, up or downward. In a way, that’s machine learning too, just the simplest form of it. I think it’s linear regression, which is just calculating that particular slope and fitting it to the cloud of price-point dots. And that is considered machine learning too.
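
The slope-fitting Jerry describes can be sketched in a few lines. This is a minimal illustration with made-up price data, not production trading code: an ordinary least-squares fit whose slope is read as the trend direction.

```python
# A minimal sketch of the moving-average / linear-regression idea:
# fit a straight line through a series of price points and read the
# slope as the trend. The prices below are invented sample data.

def fit_trend(prices):
    """Ordinary least-squares fit of price vs. time index.

    Returns (slope, intercept); a positive slope means an upward trend.
    """
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

prices = [101.0, 102.5, 101.8, 103.2, 104.0, 105.1]  # hypothetical closes
slope, intercept = fit_trend(prices)
print("upward" if slope > 0 else "downward")
```

A positive slope means the prices have been trending upward over the window, which is all the “learned pattern” amounts to in this simplest case.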


Rob Hall: I’m pretty sure I remember learning about linear regressions in high school. Yeah, that was a long time ago.


Jay Cosgrove: Did you, Did you finish high school? 


Rob Hall: Oh, cute. 


Jay Cosgrove: Jerry, I’m curious, because you used the first example of AI as bots, or what I would call bots, in games, right? Like ones you play against. The first thing that always comes to my mind is 007 GoldenEye on the N64, for all those who used to play that, and the bots coming after you while you tried to get the Golden Gun. So what differentiates, maybe in that example, between strict machine learning and AI?


Jerry Deng: In that particular example, the bot is actually more of an AI than machine learning, because it’s usually a pre-programmed route that it takes to chase from its starting point to the point where your character is, figuring out a path to it. But it doesn’t really learn from past experience. It doesn’t grow stronger over time.


Rob Hall: Okay, so it’s not smart. It just has its defined boundary of operation and that’s it.


Jerry Deng: Yes. 


Rob Hall: Okay. Now, are they getting smarter in games where the bots do learn?


Jerry Deng: Yeah, for sure. I mean, OpenAI entered one of the esports tournaments, where they trained the machine to compete with human players, and it beat quite a few elite-level players online. And similarly, DeepMind has trained something to play StarCraft that almost consistently wins over the strongest human players.


Rob Hall: Where, at least in your expertise Jerry, where do you draw the line between machine learning and artificial intelligence?


Jerry Deng: Well, today when people talk about AI and machine learning, they are mostly referring to machine learning more so than AI. If you ask a bunch of experts in the field, they would all give you a slightly different definition. So for the sake of keeping the reference point the same, I think most people are referring to machine learning as the term instead of AI anymore. Because AI is so given: even an IoT product, even an If This Then That kind of product, is considered AI to a point. So it’s too broad of a term to actually discuss.


Rob Hall: So it’s cool to just say, “Yeah, we’re into artificial intelligence,” without necessarily truly doing that.


Jerry Deng: Yeah, you could totally be doing like an If This Then That kind of statement logic and call yourself AI.


Rob Hall: Oh, well, then I’m an AI expert. I’ve done that. Just kidding. What is machine learning not?


Jerry Deng: Oh, it’s not hard code. It’s not building in rules for the machine to just follow, like the 007 bot example. You try to craft it so general that it can react to different situations. So anytime you try to hard code stuff in your solution, that’s not machine learning. In machine learning, the main differentiator is that it learns over time, learns from past experience. That’s the key component.


Jay Cosgrove: So if the situation that you’re setting up does not have a learning mechanism as part of it, then you may consider it AI, but you wouldn’t consider it machine learning at that point, because it’s not actually becoming smarter. Okay, cool. That makes sense.


Rob Hall: Like Google Analytics, for example. There’s this whole question of machine learning, data science, analytics, blah, blah, blah. And as buzzwords they’re very easily interchanged, I think, even though they mean very discrete, different things.


Jerry Deng: In your example, you’re talking about analytics, and that’s definitely a field where it can shine. You can use machine learning in, say, Google Analytics to predict trends of your inbound visitors and their likelihood of dropping off at a certain stage of their visit to your website or interaction with your mobile application. You can use that data to enhance your user experience by knowing exactly where the user drops off and how you could improve those steps of the engagement. So that’s definitely one area of machine learning. But it’s way more than that; you could apply it in pretty much any field. Pretty much, if you give me an example, I can give you a utilization of machine learning for it.


Rob Hall: Popcorn sales.


Jerry Deng: Popcorn sales.


Rob Hall: I want to sell more popcorn.


Jerry Deng: Okay, if you’re wanting to sell more popcorn, you want to have your sales data available by region, by time, and by particular distribution area, for example movie theaters and whatnot. By capturing that data, you would know when and where the popcorn gets sold more often than not. For example, maybe you have a lot of sales in movie theaters in the east area, but you’re not doing much in the west area. So there could be an opportunity that I infer from your past data: you’re selling more in the east, but there are a lot of west-side theaters that you’re missing out on. And it could be other venues, too. You’re doing really well in theaters, but you’re not doing as well in schools.


Rob Hall: So that’s just like a general sales analysis right now. And I guess the differentiator for utilizing machine learning in that instance is, I could find that out if I cobbled together my own sales report and figured out the geography chart of where my product is selling and where it’s not. But machine learning can help automate that process and draw those conclusions for me.


Jerry Deng: Yes. We come up with those analyses because we’ve dealt with enough of this, and we have those patterns sitting in our heads. We know that’s the missing gap. But to do the analysis without specifically telling the machine what to do, we’d have to come up with those directions for getting data ourselves. The beauty of machine learning is, in the ideal world, you can feed it every data point you can throw at it, and it’ll figure out where the pattern is, where the missing pieces are, where the opportunities are in that situation.


Rob Hall: Yeah. 


Jay Cosgrove: Would you say another benefit of that, because you were talking about removing the human component of it, the stuff that’s in our heads, is that it would do it consistently? Because I can imagine that being a problem: if you have three different sales guys, they might interpret data differently, whereas this would be a consistent source.


Jerry Deng: That’s true too. It’s a little bit like how our own learning works as well: we learned that we should look at the west instead of just focusing on the east side of the theater market based on our past experience. But our past experience might be limited, because we only have so much experience doing it that way. And Jay, you brought up a very good point: you could be limited because of your psychological state as well, or your preferences. So using something like a machine learning model will be more objective about a situation and won’t be too biased towards certain directions. And because of that nature, it will discover things that we usually miss, or sometimes miss.


Jay Cosgrove: I’ve only been here for going on what, three years now? But you’ve been here a bit longer. What was our first break into using machine learning?


Jerry Deng: At the time, I was very interested in this area. And one of our clients came to us with a problem of identifying handwriting, hoping to turn it into a digitized format so we could speed up the importing of that handwritten data. So that was my first crack at it, and it turns out that’s one of the harder problems, if not the hardest problem, to solve.


Rob Hall: Yeah because our handwriting is terrible.


Jerry Deng: Yes. Rob, your handwriting was perfect when we were using it.


Rob Hall: Thank you very much.


Jerry Deng: We used your handwriting to test the system. But at the time we had an intern, and I intentionally told him to write in a way that made it hard to tell the difference.


Rob Hall: I think that was Pierce, bless his heart.


Jerry Deng: Yeah, he messed it up enough. But that particular project gave us a chance to apply my learning on the side to an actual project. We had to utilize two forms of deep learning at the time, using a combination of something called a convolutional neural network, which handles image data, and something called


Rob Hall: Hold on! That sounds really convoluted. 


Jerry Deng: Yes.


Rob Hall: Convolutional neural network.

Jerry Deng: Network. Yes. 


Rob Hall: Do we dare ask what that means? 


Jerry Deng: Oh, gosh. It’s going to be hard to explain in very layman terms. But it’s like applying filters on a piece of an image. It’s kind of like our eye when we look at a cat, right? We look for a pattern of rounded shapes, like an eyeball. We look for a head-like shape for the cat’s head, some ear shapes, half-circle-ish kinds of shapes. And that sort of thing is what the training will give a filter the ability to look for in those images.
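
To make the “filter” idea concrete, here is a toy sketch: slide a small hand-picked kernel over a grayscale image (a 2D list of invented brightness values) and record how strongly each patch matches. In a real convolutional network the kernels are learned during training rather than written by hand.

```python
# Toy 2D convolution: slide a small kernel over an image and sum the
# element-wise products at each position. Large outputs mean the patch
# matches the pattern the kernel encodes.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(acc)
        out.append(row)
    return out

# 4x4 image with a vertical edge between columns 1 and 2 (dark -> bright).
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# A classic vertical-edge kernel: it responds where left and right differ.
kernel = [
    [-1, 1],
    [-1, 1],
]
response = convolve2d(image, kernel)
```

The response is largest exactly where the left and right halves of a patch differ, i.e. along the vertical edge, which is the kind of low-level feature a trained filter picks up.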


Rob Hall: Handwriting recognition is fascinating. Like, Apple just announced the other day even more improvements to iPadOS. So now, using your Apple Pencil, you can scribble handwritten notes into just normal system dialog boxes, and it’ll recognize what you wrote and take action based on your input. That seems to be taking that example to an even greater extreme.


Jerry Deng: Absolutely. I was very excited and blown away by how far it has come since about three years ago.


Jay Cosgrove: Because like three years ago when you did this, Google’s OCR wasn’t even a thing.


Jerry Deng: Actually, when we were faced with this project, we tried it on Google’s OCR, tried it on Microsoft’s OCR, tried it on everybody, and nobody solved it well.


Oh, wow. 


Rob Hall: It’s interesting to see how that technology is finally starting to make a leap forward. I was such a nerd as a kid, I saved up my money and bought, in the mid-90s, the old Apple Newton MessagePad, which was their first PDA, about 15 years before the iPhone came out. And its big main feature was handwriting recognition. And it sort of worked, if you wrote very neatly, if you went through its training. It had a training model that asked for input samples of your handwriting. And if you trained it very carefully, and then wrote as neatly as you possibly could, it worked pretty well. But it was slow, the device was big and heavy and clunky, and it certainly wasn’t as versatile as what we have today. There have been attempts to do this kind of handwriting recognition at a high degree of accuracy for a long time. But it only seems like in the last couple of years that people are taking another look at it.


Jay Cosgrove: Yeah. Is that because of the datasets? Like the inputs becoming just larger?


Jerry Deng: I can try to answer that a bit. It’s a combination of two elements, I think. Jay, you nailed a really good point. Somebody like Apple has the capability of collecting giant sets of training data from all kinds of annotators, anybody writing handwriting examples. That’s one area. The second area is the advancement of modern processors being able to run very deep neural networks on a small chip sitting in an iPad. Those two things, in combination with the advancement of the latest and greatest neural networks for encoding and understanding these texts, advance all of that into what Apple announced a few days ago.


Rob Hall: So that’s definitely a differentiator for the Apple silicon: a lot of those more sophisticated machine learning operations are done in hardware, not software.


Jerry Deng: It’s usually a hybrid, actually a combination of both. It needs the algorithm advancement, and also the machines getting cheaper and cheaper to produce and smaller and smaller to fit into an iPad. But the biggest differentiator, I think, is probably what Jay mentioned about the amount of data they could collect. Imagine them sending an Apple Pencil and a tablet to a giant training farm for people to collect handwritten text. You would have way more data than we had when we started working on that project. That’s a key component of today’s machine learning tasks.


Rob Hall: So I have to ask, Jerry. The machine learning model that you built for handwriting recognition – when all was said and done, did it work?


Jerry Deng: I vaguely remember about 85%.


Rob Hall: Yeah, that was pretty good. 


Jerry Deng: Yes, it was missing maybe a couple of characters per line. You can always do better over time with machine learning, with training and data collection and everything. Like a lot of use cases, we had to stop because we had a delivery to meet, so we had to make the trade-off and stop at a certain threshold, for example 70 or 80%. Is that good enough for the particular product or project? It might be, depending on your use case.


Rob Hall: It might be enough to at least get them started, and then give them, you know, room to grow, room to improve. 


Jerry Deng: Yes. 


Rob Hall: So how are we utilizing machine learning now? We started with this experiment around handwriting recognition, and now what are we doing with it?


Jerry Deng: There are a couple of models that we built for identifying certain elements inside an image. Those particular ones are just binary: whether that object exists in the image. We can achieve 90% to 99% accuracy fairly quickly. In general, when you have a small set of categories, maybe a binary category, whether it’s a yes or no, you can achieve very good, state-of-the-art success very quickly, within hours.


Rob Hall: Oh, really? 


Jerry Deng: Yes. For this particular client and use case, we built two models to identify pieces or components of a marketing material. In the end, we’re trying to combine those together to generate marketing material automatically, by the machine. In order to do that, we have to be able to identify those elements in the image we’re looking at: whether they exist in there, whether we see a particular image type we’re looking for. This is perfect for a machine learning project to solve, because we can’t hand-craft a detector for an object type. You can’t hard code a cat detector with just an algorithm alone.


Rob Hall: Or take the example of a company logo on a website. If they named the logo logo.jpg, then fine. But that’s not always the case.


Jerry Deng: Yes, that’s not always the case. And that’s why we can send it a bunch of images and detect whether a particular image is a logo or not a logo. For that particular use case, it can achieve very good results. You just need a good amount of training data to feed into it.


Jay Cosgrove: So in order to train machine learning models, from what you’re saying, we just need good input data. You’re pretty much telling the machine, this equals this. And then it looks through enough of that data to learn it over time.


Jerry Deng: Yes, yes. It’s like when we were taught as kids to look at an apple versus an orange. You just need to see enough apples and oranges to be able to tell the difference. I think that’s how we learn as well. In this kind of image detection task in particular, you just need to collect enough apple examples versus orange examples. And when I say enough, in today’s day and age, you can probably get away with just a thousand or a few thousand of those images. And then you can have a trained model that works fairly well at telling images of oranges from apples.


I don’t know if that answers your question.


Jay Cosgrove: Yeah, absolutely. So what you’re saying is that you feed it a set of data. You say, here are all the images of an apple, and here are all the images of an orange. If I’m correct, these would be called annotations for the images. And then the machine takes that, ingests it, and builds its own set of rules based on that data.


Jerry Deng: Correct. That’s the point. Machine learning kind of builds the rules, through the training process, in a form that the model can capture and retain. And those rules are set in a way that we can’t really pick out, but when you run another image or video through it, it just outputs orange or apple at the end of it.
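
The train-on-annotated-examples loop the two of them describe can be caricatured in a few lines. This is a deliberately tiny stand-in: each “image” is reduced to two invented color features, and the “model” is just one averaged centroid per label. Real image models learn far richer rules, but the shape of the process is the same: labeled data in, decision rule out.

```python
# Toy stand-in for annotated training: learn one centroid per label
# from labeled (features, label) pairs, then classify new points by
# nearest centroid. All feature values below are made up.

from collections import defaultdict

def train(examples):
    """examples: list of ((f1, f2), label). Returns label -> centroid."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (f1, f2), label in examples:
        sums[label][0] += f1
        sums[label][1] += f2
        counts[label] += 1
    return {lbl: (s[0] / counts[lbl], s[1] / counts[lbl])
            for lbl, s in sums.items()}

def predict(model, features):
    """Pick the label whose centroid is closest to the features."""
    def dist2(c):
        return (c[0] - features[0]) ** 2 + (c[1] - features[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl]))

# Annotated examples: (redness, orange-ness) -> label.
training_data = [
    ((0.9, 0.1), "apple"), ((0.8, 0.2), "apple"), ((0.85, 0.15), "apple"),
    ((0.2, 0.9), "orange"), ((0.3, 0.8), "orange"), ((0.25, 0.85), "orange"),
]
model = train(training_data)
print(predict(model, (0.88, 0.12)))  # a new, unseen "image"
```

Note that the learned centroids are the “rules we can’t really pick out”: nobody wrote an apple rule by hand, the averages simply fell out of the labeled data.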


Jay Cosgrove: That’s awesome. Yeah, that makes a lot of sense. 


Rob Hall: That was a really good example, Jerry, just to break it down into a very simple, rote understanding of how machine learning can be a benefit, right? Never mind just how it works.


Jay Cosgrove: Think about the alternative to that. And this is really what I think Jerry is getting at: machine learning solves for creating some sort of rules-based system. So let’s take a text model, for example. Say you feed it a bunch of text, and we’ll use the same example: this text is talking about an apple, and this text is talking about an orange. You could create a gazillion rules to interpret that manually, right? Like, where is it placed in the sentence, is it the subject, is it the object, and then hope at the outcome that you’re still understanding it correctly. Or you can just feed a bunch of data to it, say this sentence talks about an apple, this one talks about an orange, and then the machine creates all that logic for you, without you having to actually think through it.


Jerry Deng: That’s correct. It’s the classic bottom-up approach if we were to create the rules and try to see if the sentence talks about a red, round, reflective surface or something like that to describe an orange or an apple. But that doesn’t capture most of it. It could be some really arcane example of describing the apple.


Jay Cosgrove: Right? Yeah. It could be a green apple. And how do you solve for that variation?


Jerry Deng: And red and green are just two of those colors. You have a little yellow, you have... yeah, variations of different colors.


Rob Hall: If I’m trying to explain this to a customer who has expressed an interest in machine learning, where do they start? Where do I point them? We’ve highlighted some very fundamental, basic use cases, but trying to contextualize it a little bit for our clients, or anyone out there who has an interest: where do they begin?


Jerry Deng: That’s a very good question, because it’s very case by case. There are very obvious use cases for identifying images, vision or video types of use cases, right? There are very specific use cases for classifying a blob of text as positive or negative. One of our clients gave us work doing sentiment analysis on social tracking: whether the client is being talked about negatively or positively. Those are very cookie-cutter kinds of use cases, right? But you can have different use cases for different kinds of business. For example, one of my friends came to me; he was working on an online car sales kind of startup. And he was asking if we could pick out the primary image of the car from what they get from the dealer. That’s a very easy problem to solve. You just need to have somebody pick out the primary image, and if you already have a database set up with the primary image per vehicle, then you can train the model to pick out the best image to show as the hero or primary image fairly quickly.


Rob Hall: But in that example, you’re saying you’re not just picking out the first image in a sequence; you’re trying to make a qualitative assessment of the images themselves and select the best one. Correct? So you’re truly adding a learned human element to that automation.


Jerry Deng: Yes. 


Jay Cosgrove: Would that be summarized as like a recommendation piece then? Right, like, I’m thinking of a new customer coming into DS and us trying to make the judgment call of, should this be rules-based or not? If someone hints towards recommendations, would we pretty quickly lean towards machine learning?


Jerry Deng: That keyword itself is probably a good hint. But it all depends on the trade-off, right? Sometimes you use something because it’s fast and cheap and it works. Sometimes you use it because it’s the only way to solve the problem. So for things that you know you can’t solve by building rules, you look at whether you can collect enough training data and train a model for it.


Jay Cosgrove: So the indicator might be less the keyword, and more like, how complex would the rules-based system be to solve this?


Jerry Deng: Yes, and whether it’s even solvable by a rules-based system.


Rob Hall: And just to clarify real quick, when you say a rules-based system, you’re referring to basically simplistic statements or configuration methods that say: if this, then that; if this, then that. So if I want to receive a notification, it’s a yes or no. If I want my background to be a dark color, it’s a yes or no. If I want my alarm to go off at 6:30 in the morning, there’s no learning involved. It just is whatever the selection is.


Jerry Deng: It’s just a branching tree, if you think about it. It’s a tree-walking process.
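
Rob’s rules-based examples amount to a fixed branch per setting, which is easy to show directly. The settings and actions below are made up for illustration; the point is that nothing in this code changes with experience:

```python
# A rules-based system as a fixed branching tree: each setting maps to
# a hard-coded action, and the behavior is exactly whatever was
# configured, forever. There is no learning anywhere in this function.

def apply_rules(settings):
    actions = []
    if settings.get("notifications"):
        actions.append("send notification")
    if settings.get("dark_mode"):
        actions.append("use dark background")
    if "alarm" in settings:
        actions.append(f"ring alarm at {settings['alarm']}")
    return actions

print(apply_rules({"notifications": True, "dark_mode": False, "alarm": "6:30"}))
```

A learning system would instead adjust what it does based on the history of outcomes, which is exactly the piece this tree-walk lacks.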


Rob Hall: Let’s take the example of waking up in the morning. A rules-based approach to that could be: Jay sets his alarm for 6am every morning; if he turns off his alarm three times, then we need to activate a second alarm to go off at 6:30 to make sure he’s not late for work. A learning model would be: well, I’ve noticed that Jay’s done that X number of times, so maybe I need to come up with some alternative method to wake Jay up, increase the volume of the alarm, or recommend some other response.


Jerry Deng: Right.


Rob Hall: That isn’t necessarily hard coded.


Jerry Deng: Yes, that’s a very good example, right? It could increase the volume, increase the frequency, change the tone, or use other connected devices that it can actually make sound out of.


Jay Cosgrove: Sounds like this is going to be a brutal morning for me.


Rob Hall: Yeah, turn the lights on in Jay’s room.


Jay Cosgrove: I love this, because as you were describing that example, Rob, I was actually thinking of one of my first smartphones, a Motorola Razr Maxx. One of the distinct features back then was that there were very brand-specific apps, if you remember. And there was this app that I think ended up rolling out to everyone else; I forget the name of it. Smart Actions, I want to say. And it pretty much did that if-then, you know, If This Then That kind of situation. I loved it, because you could set all these triggers, just like you’re describing. If I snooze my alarm, then do this. And you could set all the rules yourself. I remember thinking it was such an amazing feature. And then within a month, I didn’t use it at all. It seems like the progression of technology is saying, okay, we can’t rely on users to create these automations themselves. Instead, we’re going to look at their actions, use that as training data, and then just recommend to them what they should do, so that all they have to do is hit yes.


Rob Hall: I see that happening with my wife’s Apple Watch. She uses the Apple Watch to track all of her physical movement, activity, exercise, and things like that. And really, the only main configuration is this general level of activity. It gives you three options: low, medium, or high. From there, if she hasn’t gotten a certain amount of physical activity in a certain day, it starts to remind her of that. And over time, maybe a month into her using the thing, it starts giving her reminders throughout the day. Hey, it looks like you’re not exercising as much as you normally do. Hey, usually you walk a little bit more than you have so far today; don’t forget to get your steps. Hey, you’ve been sitting a lot today. You don’t normally do that. Maybe you should stand up. And it’s funny, because in some ways it’s like an antagonistic mother. Or I shouldn’t say mother, an antagonistic parent. But at the same time, you can tell the types of notifications and reminders she’s getting have evolved. They’ve changed over time. And they’ve gotten smarter based on a broader view of her overall trend of activity.


Jerry Deng: That’s what a lot of us notice. And that’s basically the trend: a lot of the bigger players have access to a lot of data, and they can do a lot of good recommendations. We see it in the messages we type. We see it in Gmail’s recommendations to autocomplete your sentence. We see it in Amazon’s product recommendations. We see it in the Apple Watch notification or reminder about your activity. Pretty much everywhere we turn with a gadget. And it feels like a little intrusion of privacy a bit. But I guess the reward is we have better tooling at the end of the day, and that’s a different conversation, the privacy issue. But the output is amazing.


Rob Hall: Jerry, if I’m a developer and I’ve never done anything with machine learning before, how might I get started? How might I learn about it?


Jerry Deng: I’m the type of person that learns by building. You can learn the high-level concepts everywhere; there are tons of videos and blog posts out there. But I would get my hands dirty and build something to get a sense of it. That would probably sit in your brain more than if you go to a typical class. That’s how I would see it. But to get really good at it, you do want to know the fundamentals: the why, the inner workings of the optimization techniques, the math, the calculus, the derivatives, how it actually does the training with backpropagation, or learns from mistakes by adjusting those error rates. You can learn those later, but I think the best thing is to have a rewarding early experience with machine learning. That satisfaction just reinforces your willingness to keep going.


Rob Hall: You hear that, kids? Pay attention in calculus class.


Jerry Deng: Yep, I don’t remember most of my calculus.


You just have to.


Jay Cosgrove: It’s like riding a bike. You get right back on it, I’m sure. 


Rob Hall: Yeah. 


Jay Cosgrove: When you have new clients coming in, what do you think some of the biggest challenges are for implementing machine learning into their architecture, I guess?


Jerry Deng: You spend most of your time figuring out how to get training data. In the handwriting one, we had to spend a lot of time generating synthesized handwritten text, from handwritten fonts and a lot of image techniques, to create fake handwriting images. In the recent task of collecting examples to identify whether something is a logo or not, I spent most of my time finding really weird logo images, really weird non-logo images, to train the model so it actually trims out the outliers. That ended up being the most daunting task. And recently, I’ve been thinking about a buyer-profile kind of model for one of our projects as well. That requires collection of past sales data, and sometimes people don’t keep it. People don’t have a good CRM system. People don’t have a good accounting system that glues all of it together, and you end up spending the time collecting it, or figuring out a way to streamline the collection. And that turns out to be the bigger problem. The model side is going to be some work, but it’s way less in terms of the proportion of time spent. So if a client comes to us, we would look at the problem they’re trying to solve: whether there is data currently collected in their database system, collected in their analytics system, collected in their image or video archive system, and figure out whether we can actually save the collection.


Jay Cosgrove: Like, legally, that’s what you’re saying. It’s like, can they legally save it? Yeah. Wow.


Jerry Deng: Yeah. Yes, that’s another problem, right? Whether you can legally use that stored data, or images, or video. That’s another problem. But a lot of the time it’s actually getting that bundle of data, and if we can, figuring out how we build that collection into their future user journey or product workflow.


Rob Hall: So here’s a question: How often have we encountered a challenge where someone is creating a model to analyze a certain set of data, and the data the model is actually performing analysis on, they’re legally safe to process? The customer has the rights to that information, but the training data used to inform the model might be questionable in terms of legality. Is there an ethical dilemma there? "Great, we used all this training data we found on the internet that may or may not be proprietary, may or may not be copyrighted, may or may not be in the public domain. But we trained this model that can now go do something else that we know is legal." How do you draw that distinction? How do you navigate that?


Jerry Deng: I don’t think anybody fully has the answer to that, since this area is so new in the eyes of lawyers and the legal system. Obviously, if the data is completely covered by your privacy policy, and it’s captured in the process of you or your users using the application, then you own that data. You obviously have the legal right. I’m not a lawyer, but data that is sitting out in public, that’s not something I can answer. However, when you come down to the model that actually captured the learning, those inferred rules and whatnot that are sitting in the model, I don’t think those would touch any copyright or legality issue themselves. Those rules are inferred from the data. And I’m sure the legal system is catching up in this particular world.


Jay Cosgrove: I’ve worked on a few projects with Jerry that have used machine learning models, and we’ve hit on this legal issue before. It’s not an easy one to work out, even ethically inside yourself, because I think the internet in general was founded on a concept of open source, right? But when you start using open-source data to train models for for-profit businesses, that’s where it becomes really complicated ethically.


Rob Hall: Well, and then you get into this other ethical debate that just because information is in the public domain doesn’t mean that it was intended to be there. 


Jay Cosgrove: Exactly. Yeah. Or intended to be used for a specific purpose like that, you know?


Rob Hall: That’s right. 


Jay Cosgrove: I think one of the big differentiators I’ve seen in the argument is: is this data potentially being used to harm someone? Maybe not physically, obviously. But take, for instance, if your data is being used to help train a model that would then benefit your competitor, and your competitor is utilizing it. I’d be pissed about that, and I think rightfully so, you know? Sure.


Rob Hall: Yep. That’s fair.


Where do you think all this is heading Jerry?


Is it going to lead to the destruction of the planet? 


Jay Cosgrove: Or maybe Skynet? 


Rob Hall: Yes. Is it gonna become sentient and take over? 


Jerry Deng: I think the real question is, has Jerry already built Skynet? And even this podcast is part of his training data. 


Rob Hall: That is a very good question. Please explain it, Jerry.


Jerry Deng: I don’t know how to begin. I guess there will always be a fear of machine learning advancing too far. There are already examples of it, for instance deepfakes: generating images or videos of a president saying some random garbage that somebody scripted.


Rob Hall: I thought that was just the news?


Jerry Deng: Haha, you can do that. You can generate fake articles too. So those are obviously the negatives. As with any technological advancement in history, similar to nuclear power, to electricity, to the advancement of the chemical industry, there will always be good users and bad users of the tool. And it’s really the person behind the tech that determines whether it’s destructive. My take is we are going to be faced with advancement. The tools are going to advance, the tech is going to advance. You can avoid it, but avoiding it doesn’t stop the world from using it and advancing it in so many ways. So embrace it, and it will always be a cat-and-mouse game: you will always have bad actors, and you will also have good guys who come up with ways to counter the bad use cases, and that will continue on. And I don’t think machine learning is anywhere close to being a general intelligence that can reason and think like a human. These models are all weak specialists. They can be very good in certain areas, even better than humans: the chess player, AlphaGo the game player, determining whether a scan shows cancer, that sort of thing. They’re going to do a better job than humans, but those are very narrow areas. It will never get to the point of having desires or goals of its own. But a bad actor can bake some of those bad behaviors into a model, and we’re going to have to deal with that in the future. It’s just a cat-and-mouse game at the end of the day. That’s how I see it.