Thanks, Margaret.
So, like I said, I'm Jerry Kurata, and I'm fortunate to get to moderate this next section on using TensorFlow Lite to expand our machine learning onto both mobile and edge devices. We have a couple of presentations on this. Our first one will be given by Margaret, so let me introduce her. You have seen Margaret doing all this management behind the scenes, but she's also a key organizer of one of the largest communities, GDG Seattle, which hosts a lot of events throughout the year, as we mentioned earlier. In addition to that, she's an accomplished author. She has written many articles on machine learning development, including a recently published series on how to use TensorFlow Lite on your phone to create anime versions of selfies. It's really cool, and I suggest you all check out the Medium articles and her code. But even as busy as she is, Margaret is constantly helping other people. And to help us today, Margaret is going to go over TF Lite and the awesome tools and community that will make your TF Lite experience enjoyable and fun.
With that, I'll let Margaret take it away.
Right. Thank you so much, Jerry, for the introduction. And, you know, like Jerry said, we are both part of the Google Developer Experts program, the machine learning GDE community, so we've worked with each other. Today I am very excited to share with you about TensorFlow Lite. I will discuss what TensorFlow Lite is, why we need it, mostly focus on how to use TensorFlow Lite, and then spend some time talking about the awesome community. With TensorFlow 2.0, model saving has been standardized on the SavedModel format. You can still save as a Keras model, but we recommend the SavedModel format, which you can deploy to cloud, web, mobile, IoT, microcontrollers, and GPUs. So TensorFlow Lite is a framework with a set of tools for deploying machine learning to mobile and embedded devices. Unlike TensorFlow.js, which you heard about in some earlier talks and which is also a library you can use to write your models, TF Lite is the framework and tools to help you deploy; it's not something you use to write your machine learning training code. And there are many components. The key ones are the converter, for converting to the TF Lite model format; the interpreter, which you use to run inference; the TF Lite ops; and the interface to hardware acceleration. In addition, there are some other features and tools beyond these four main components. So why do we need TensorFlow Lite? When we run a machine learning model on device, we get the benefits of access to more data, faster user interaction, and preserved privacy. But there also come the unique constraints of less compute power, limited memory, and battery consumption, and you have to worry about model size and inference speed. TensorFlow Lite was created to help deploy models to mobile devices while working within all these constraints I just mentioned.
So today, there are more than four billion devices and more than a thousand apps in production running with TensorFlow Lite.
And how do we use TensorFlow Lite? Well, there are beginner getting-started topics and there are also advanced topics. To get started, we talk about how you save the model and do the model conversion. Also, for people who are not so familiar with machine learning, there are TensorFlow Lite Model Maker, metadata and codegen, and the ML Model Binding in Android Studio, which helps with using the model in Android. Those are the beginner topics. The more advanced topics are how you can reduce model size with quantization and pruning, how you speed up inference with the GPU delegate or the Android Neural Networks API, and how you do benchmarking and profiling. So today my goal is more to help you get started, as well as to talk about some additional resources that come from the community. First, let's take a look at a very simple, end-to-end example, going from training a model with tf.keras, to converting that model to TF Lite, to running that model on Android. Very simple. From this diagram, you will see that when you train with tf.keras, you have the option of using the Keras Sequential, Functional, or model subclassing APIs. When you convert, you can use either Python code or the command line. And then here are all the steps of how to run it on Android, which I will go over in detail later.
So this first example is very simple.
We use MNIST. For those of you who are not familiar with machine learning, the MNIST dataset is what people use as the "hello world" of machine learning: sixty thousand training images, ten thousand test images, 28 by 28 grayscale images, with ten classes ranging from zero to nine. And we can also use it for benchmarking machine learning algorithms.
I have included the Colab here with the training code: you import the data, you define the model architecture, you train the model, and then we save and do the conversion. In this particular example we're saving as a Keras model before we do the conversion. Once you write the code, you can call model.summary() to show the model architecture. The reason I'm showing the model architecture is that a lot of times, when you're working on the conversion or even the mobile implementation, you need to know the shape of the input data going into the model, and also the output. To visualize the model, you can use visualization tools such as TensorBoard, or use Netron, which is an open-source tool: once you install it, you just drag and drop the TF Lite model into Netron and you will see the model visually. Later on I will talk about the metadata, which is a new TensorFlow Lite tool that will also allow you to inspect the model.
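As a rough, minimal sketch of what a training notebook like that might contain (the layer choices here are illustrative, not the actual Colab):

```python
import tensorflow as tf

# Load MNIST: 60,000 training and 10,000 test images, 28x28 grayscale.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # normalize to [0, 1]

# A deliberately simple Sequential architecture for the ten digit classes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)

# Print the architecture to check input/output shapes before conversion.
model.summary()
```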
So once you train the model (this is just a simple example, so I didn't really show the training code there), you can save it as either a SavedModel or a Keras model. In TensorFlow 2.0, the saving method actually defaults to the SavedModel format, not the Keras model format.
The SavedModel format is recommended. When do you use which? If you are sharing a pre-trained model on TensorFlow Hub, for example, you definitely need to use the SavedModel format. It's also useful if you don't know the deployment target. But let's say, as in this case, I'm just writing a very simple example with MNIST: I know my deployment target is Android, and I'm writing the training code, saving it, and converting it myself. So I just use Keras.
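A minimal sketch of the two save options, assuming the trained `model` from the sketch above (the file names are placeholders):

```python
# Option 1: SavedModel, the TF 2.x default and the recommended format.
# A path with no extension produces a SavedModel directory.
model.save("saved_model_dir")

# Option 2: a single-file Keras model; the .h5 extension selects HDF5.
model.save("mnist_model.h5")
```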
And I'm showing you an example where we have two options to do the model conversion: you can either use the command line here, or you can use Python code, which is recommended. Again, when you save, you can save either a SavedModel or a Keras model.
The command-line method is helpful if, for example, somebody else has trained the model and gave it to you, so all you have is a model in some format, a SavedModel or a Keras model; then you run the command line to convert it. But writing Python code to do the conversion is recommended. Here is a very simple example: you create a converter, you set quantization, and then you convert the model. After that, if you are in Colab, you create an actual .tflite model file and then you can download the model file.
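A sketch of both routes, continuing from the earlier snippets (file and directory names are placeholders):

```python
import tensorflow as tf

# From the in-memory Keras model:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# ...or from a SavedModel directory:
# converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Optional post-training (dynamic-range) quantization to shrink the model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("mnist.tflite", "wb") as f:
    f.write(tflite_model)

# Command-line alternative:
#   tflite_convert --saved_model_dir=saved_model_dir --output_file=mnist.tflite
```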
For those of you who have worked with TensorFlow Lite for a long time, you probably remember that a couple of years ago, doing a conversion was not as simple as just a few lines of code.
Back then it was really complicated. So now you have converted your model to the TF Lite format. Before you deploy it, after you convert, it's a really good practice to validate the result and make sure that the conversion process didn't mess up your model. You can run the TensorFlow inference and the TensorFlow Lite inference and compare the results.
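A minimal validation sketch along those lines, reusing `model`, `x_test`, and the converted `mnist.tflite` from the earlier sketches:

```python
import numpy as np
import tensorflow as tf

# TensorFlow (Keras) inference on a handful of test images.
tf_probs = model.predict(x_test[:10])

# TensorFlow Lite inference on the same images.
interpreter = tf.lite.Interpreter(model_path="mnist.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

tflite_probs = []
for image in x_test[:10]:
    x = np.expand_dims(image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_detail["index"], x)
    interpreter.invoke()
    tflite_probs.append(interpreter.get_tensor(output_detail["index"])[0])

# The two sets of probabilities should agree closely (small differences
# are expected if quantization was applied during conversion).
print(np.max(np.abs(tf_probs - np.array(tflite_probs))))
```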
So now that we have a model, here's how we implement it on Android. I know not all of you are Android devs, but I'm still going to go over the steps, just to show you how complex it is.
You have to do a lot of manual work: put the model under the assets folder, update the dependencies, et cetera, et cetera. For the dependencies, we have to come into Android Studio, manually create an assets folder, put the TF Lite model there, and then go into build.gradle to manually set a bunch of dependencies on TensorFlow Lite (and if you work with metadata, you set all those dependencies too), and also, for example, set the model file to not be compressed since it's in the .tflite format. Then there's the input image. In this case, since we're working with the MNIST digits, the image might be captured by drawing on a canvas in a custom view, by getting an image from the photo gallery, by getting an image from a third-party camera, or from live frames from the camera. After that, you have to make sure the input matches the model: earlier I mentioned the model is trained on 28 by 28 images, so you have to make sure your input images are like that. And depending on what kind of model you're using, Inception or MobileNet, etc., you have to match that input shape and do image pre-processing to convert from a bitmap to a byte buffer, because that's what TF Lite works with.
You also have to normalize the pixel values to a certain range, just like what you do in model training. And you need to convert from color to grayscale, because the camera and the other input methods I mentioned earlier all give you images in color, and since the model is trained on grayscale, you have to do that conversion. Then, to run the inference, you have to load the .tflite model you placed under the assets folder, create the interpreter, and run the inference, which gives you some results: an array of probabilities, each corresponding to a category. In this case we have ten categories, going from zero, one, two, three, and so on up to nine. You find the category with the highest probability and then output the result to the UI. So this is the example app here. It's a digit recognizer where I draw with my finger, say, the number eight on a custom canvas, click the classify button, and the model will predict it's an 8.
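For reference, here is roughly what that pre- and post-processing pipeline looks like in Python (the Android code mirrors these steps in Kotlin or Java); `digit.png` is a hypothetical input image, and the `interpreter` is the one set up in the validation sketch above:

```python
import numpy as np
from PIL import Image

# Pre-processing: resize to 28x28, convert color to grayscale ("L"),
# and normalize pixel values to [0, 1], just like in training.
img = Image.open("digit.png").convert("L").resize((28, 28))
x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

# Run inference.
interpreter.set_tensor(input_detail["index"], x)
interpreter.invoke()
probs = interpreter.get_tensor(output_detail["index"])[0]

# Post-processing: ten probabilities, one per digit; take the largest.
print("Predicted digit:", int(np.argmax(probs)))
```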
So I just walked you through an example with MNIST, which is a toy example: the model architecture is super simple, the dataset is super simple.
So training with Keras is fairly easy. I didn't even include the code earlier, because it's just a few lines of code. Model conversion, too, was fairly easy, as I showed you earlier, either via the command line or Python code.
Either way, just a few lines of code, but the Android implementation was a lot of work.
You know, you don't have to be an Android developer to understand what I explained to you earlier: all those manual steps of placing the file, updating dependencies, and worrying about image pre-processing, input tensor shape, whether it's color or grayscale, post-processing, etc., etc. Right.
Anyway, I have included my blog post there for you to read in more detail. So how can we make it better?
What I just described to you, especially the Android implementation, was very complicated. At TensorFlow World last year, quite a few new features were announced: the TF Lite Support Library, the metadata, the Model Maker, and, as I'll cover later, the ML Model Binding. All of these make it much easier to work with TensorFlow Lite. I will talk about the metadata and the codegen briefly. You will use the TF Lite Support Library, which provides two pieces of functionality. One is metadata, which contains information about the model. I didn't include the details here, but you can add it by running a command-line script. Currently, if you go to the official TensorFlow documentation, you will see the metadata script for image classification, and I believe a couple of other tasks are supported as well, such as style transfer. And I want to talk about an example I'm working on called selfie2anime, so we can also use metadata for GANs, or image-to-image translation. What the tool does is that you run it through the command line, or maybe in Colab, and it helps add a bunch of information about your model, for example the input shape and what the model is about. At the same time, it enables codegen, which is automatic code generation of an Android wrapper for using the TF Lite model.
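As a hedged sketch of adding metadata programmatically, assuming the `tflite-support` package's `metadata_writers` API (the paths and normalization values here are placeholders for the hypothetical MNIST model):

```python
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

# Describe how the model expects its input to be normalized, and attach
# a label file; labels.txt would list the ten digit classes, one per line.
writer = image_classifier.MetadataWriter.create_for_inference(
    writer_utils.load_file("mnist.tflite"),
    input_norm_mean=[0.0],
    input_norm_std=[255.0],
    label_file_paths=["labels.txt"])

# Write out a new .tflite file with the metadata embedded.
writer_utils.save_file(writer.populate(), "mnist_with_metadata.tflite")
```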
So earlier I mentioned a lot of really manual process, and all of that will be taken care of by codegen, so you don't have to do a lot of the manual work. I wrote a two-part tutorial to demonstrate those two features, the metadata and codegen, through the ML Model Binding in Android Studio. But the first part actually uses another tool called TensorFlow Lite Model Maker. When you do machine learning training, you normally have to worry a lot about data loading and pre-processing, which is really hard in machine learning. When I showed you the toy example with MNIST, you didn't have to worry about it, because that toy example comes as part of Keras: just one line and you can load the data. Or when you use, say, TensorFlow Datasets in TensorFlow 2.0, there's also ready-made data. And if you use Kaggle, you can also find a lot of very nice, clean data. But in the real world, when you work on your own machine learning problems, you may spend a lot of time on the data processing part. The nice thing about TensorFlow Lite Model Maker is that you don't have to worry about the data processing part: all you have to do is point it to your folder of images or files.
Right. So the input can be TFDS, can be Kaggle, can be something you download from the Internet, or pictures of your cat that you took. All you have to do is point to those folders, and with just a few lines of code you can create a model. It currently supports image classification and text classification, and it defaults to EfficientNet, but you can also change it to, say, MobileNet, or a particular model that you find on TF Hub. So it's quite versatile. Another nice thing about TensorFlow Lite Model Maker is that if you use it to create a model, by default the metadata is already part of the TF Lite model that gets generated. Earlier I mentioned that to get metadata added to a TF Lite model, you can either run a Python script or write some code in Colab to do it; either way, you have to write a bunch of, quite frankly, very boilerplate code to add that metadata. Model Maker does that for you. So then you have a TF Lite model with metadata.
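A minimal Model Maker sketch, assuming the `tflite-model-maker` package (the exact import paths have varied across versions, and `my_images/` is a hypothetical folder with one subfolder per class):

```python
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# Point the loader at a folder of images; labels come from subfolder names.
data = DataLoader.from_folder("my_images/")
train_data, test_data = data.split(0.9)

# Train a classifier; the default backbone is EfficientNet-Lite.
model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)

# Export a .tflite file with the metadata already embedded.
model.export(export_dir=".")
```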
Once you have a TF Lite model with metadata, in Android Studio all you have to do is go through a menu path to say "import my model". Once you import that TF Lite model with metadata, all the manual stuff I mentioned goes away: placing the model manually under some folder, you don't have to do it anymore; manually going into build.gradle to update the dependencies, you don't have to do it anymore; and writing all the code to load the TF Lite model and run the inference, you don't have to do much of that either. Other than a few minutes going through the menu path, the import is done. It will also give you some code snippets that you can copy and paste to directly use your model.
So now I'm going to switch gears and talk about community. As I said, in 2019 the model conversion got much better; a bunch of improvements were made, but we didn't have too many end-to-end TensorFlow Lite samples just yet. Fast forward to now: we actually have a dedicated repo under the ML GDE community on GitHub. On GitHub, you can find a list of sample ideas and projects that you can help with. I only took a screenshot of the top part of it, the project ideas where help is needed, but on GitHub you will actually find in-progress and completed end-to-end tutorials. You can help with the existing proposed ideas, or you can propose new ideas, and you can help with creating the model or converting the model.
You can help with writing the iOS or Android implementation, or with implementing the model on, say, an edge device.
And I want to talk briefly about selfie2anime, which is the first in the series of end-to-end TensorFlow Lite tutorials I just mentioned. This is a collaboration: me working with another machine learning GDE, Sayak Paul, and Khanh LeViet from the TensorFlow Lite team. And this is a state-of-the-art model: if you look at the link that I put here, you will see that the paper's repo has about four thousand stars, and it's image-to-image translation with GANs. However, even though it's state of the art, the model training code was written in TensorFlow 1.x.
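That 1.x-to-2.x gap matters because the TF 2.x converter works from a SavedModel; a minimal sketch of that conversion step, assuming the TF 1.x training code has been exported as a SavedModel (the directory and file names here are hypothetical):

```python
import tensorflow as tf  # TF 2.x

# Convert a SavedModel exported from the TF 1.x training code.
converter = tf.lite.TFLiteConverter.from_saved_model("selfie2anime_saved_model")
tflite_model = converter.convert()
with open("selfie2anime.tflite", "wb") as f:
    f.write(tflite_model)
```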
So this end-to-end tutorial gives an example of how to save the model in TensorFlow 1.x, then, as sketched above, do the model conversion in TensorFlow 2.x, then run the inference in Python, as I said earlier, then add the metadata, run the benchmarking tool, and finally implement it on Android. I will put up a link to the selfie2anime project so that you can check out the details. I also want to talk about Awesome TFLite, the awesome list that I put together with all the TensorFlow Lite models, apps, tutorials, and learning resources; you can check it out. The machine learning GDE and Android communities really helped out with this repo a lot, we also had support from the broader community, and thanks as well to the TensorFlow Lite team for their help with this. So definitely check it out. A few things I did not mention in this talk, because I tried to compress it: if you go to the Awesome TFLite repo, you will also find resources for things such as ML Kit, implementing your TF Lite model with MediaPipe or with third-party SDKs, for example fritz.ai, and just a whole lot more, like books and videos, etc.
Some additional resources before I close my talk today: I want to give a shout-out to machine learning GDE Sayak Paul.
He has this Adventures in TensorFlow Lite repo, where he goes in depth on quantization and on saving and converting models. And just today, at GDG Kolkata, Hoi Lam from Google gave an excellent talk on Android ML. I encourage you to check that out. I still need to watch it myself, but I'm sure he talks about all the new ML features, including how to use Firebase and ML Kit, etc. And there's a blog post by Khanh LeViet called "What's new in TensorFlow Lite" from the TensorFlow Dev Summit 2020, where he goes in depth on a lot of the new TensorFlow Lite features from this year. So thank you, everyone, for listening to my talk. And please keep in touch: follow me on Twitter, Medium, and GitHub to learn more about deep learning, TensorFlow, and on-device machine learning.
Ok, Margaret, thank you. Thank you very much.
Let's see, a couple of questions here, if you have a moment. The Model Maker sounds like a really cool piece of code. How long, if somebody knows what they're doing already in Android, would you say it takes them to get their first couple of models up and going?
Hmm. That is an excellent, excellent question. In fact, somebody has asked me that question before: an Android developer asked me, and a UX expert asked me. And here's my answer. There's also this Teachable Machine, which I think might have gotten mentioned earlier, so I want to compare and contrast those two with the Model Maker. I will say, if you don't know machine learning, it is still a bit challenging, in particular if you don't know Python that well. But if you know Python very well and you've taken some beginner machine learning classes, like you have some general idea of deep learning, image classification, or text classification, it's very, very easy. If you don't know Python and have no clue about machine learning at all, then I think it's challenging, and I would recommend you use more of the tools with a web interface and drag and drop, like Teachable Machine. Or there's another thing I didn't mention in my talk but usually mention in my previous talks, which is called, what is it called, AutoML Vision Edge, part of ML Kit or Google Cloud. I will find a link and share it with everybody. So basically, for people who don't know machine learning at all, it's better to just use a web interface with drag and drop.
Are you seeing a lot of, I know you specialize in Android and things like that, do you see a lot of Android developers trying to build machine learning into their applications?
I think so. Again, I'm going to add this to my slides later: if you go to tensorflow.org, there's a community tab, and there are different communities, right? There's TensorFlow docs, there are different ones, and there's this discussion group for TF Lite. I'm part of that community, so I see the questions that come in, people asking questions. And from what I've seen, last year versus now, I see so many more questions, which is great, about how do I do X, Y, Z on Android, how do I run X, Y, Z model on Android, ranging from people who know machine learning but don't know a whole lot about mobile, to just mobile developers who want to put a TF Lite model on Android.
Ok, we do have one question that just came in, from, let's see, Greg here: is MediaPipe working well now on Android? I don't know what MediaPipe is.
So I will, yeah, I will take that question. If you follow me on Medium, you will see that one of the articles I wrote is about getting started with MediaPipe on Android. When you say "working well": I will say MediaPipe works pretty well on Android. If you go to the MediaPipe GitHub and look at their examples, they work extremely well. In fact, some of the complex models in ML Kit are actually powered by MediaPipe under the hood, and MediaPipe is excellent for working with things like multi-hand tracking, really heavy-duty, complex models. But if your question is whether it's easy for Android devs to get started, I will say not so much; just read my blog post, because I don't think it's that Android-friendly to get started with. But does it work well on Android? Yes, it works very well on Android.
Ok, great. One other thing I guess we probably should mention: you're kind of Android-specific, but a lot of this, most of this, still works in the iOS community too, right?
And that's an excellent point. Yes. Because, you know, in my talk I talked a lot about Android. I will say TF Lite in general, like if you create a TF Lite model, a lot of the process I mentioned should work with iOS. However, not everything. For example, the ML Model Binding, the thing where I mentioned you go to Android Studio and just click a button to say "import my model", that only works with Android at the moment. So yeah, some of the tools only work on Android, but in general the model format, yes, it works with iOS, and it works with IoT, which the next session is about.
Right. Right. So it sounds like maybe the iOS side is trailing a little.
A little, let's say. If you look at my examples, actually, I didn't include a whole lot of examples, but if you go to Awesome TFLite and look at the examples, or the official examples, you'll tend to see there are a lot of image examples; we're still catching up with the text examples. And it's kind of like that with iOS: we're maybe catching up a bit on the iOS part, but I will say that right now the TensorFlow Lite team has made a lot of progress in that area.
Ok, great, great.
Let's see here, I don't see any other questions. I guess I would suggest everybody go and take a look at Margaret's Awesome TFLite repo. I've looked at it.
I don't claim to be an Android developer by any stretch of the imagination; I've done some, but not a lot. But it's helped me, because I come from a Python background, to be able to move things over and to understand some of the little gotchas that you have when you work with these small systems, such as dealing with memory and that sort of thing, which you have to think about maybe differently on a small phone device or even smaller hardware.
Ok, well, thanks, Margaret. Do you have anything else?