PaLM 2 is Google's next-generation large language model, building on the company's legacy of breakthrough research in machine learning and responsible AI. It outperforms previous state-of-the-art LLMs at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation. These capabilities come from the way it was built: compute-optimal scaling, an improved dataset mixture, and model architecture improvements. This article offers a quick and straightforward method for using the PaLM 2 API to extract knowledge and answer questions from text.
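The question-answering approach described above boils down to grounding the model on a passage of text. The sketch below builds such a grounded prompt in plain Python; the commented-out API call at the end assumes the `google.generativeai` client, an API key, and the `text-bison-001` model name, so treat that part as illustrative rather than definitive.

```python
# Sketch of grounding a PaLM 2 question-answering request on a passage of
# text. Only the prompt-building step runs here; the API call itself is
# shown commented out because it assumes a configured client and API key.

def build_qa_prompt(context: str, question: str) -> str:
    """Embed the source text and the user's question in a single prompt."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_qa_prompt(
    context="PaLM 2 excels at reasoning, coding, and multilingual tasks.",
    question="What kinds of tasks does PaLM 2 excel at?",
)

# With the client installed and configured, the call would look roughly like:
# import google.generativeai as palm
# palm.configure(api_key="YOUR_API_KEY")
# response = palm.generate_text(model="models/text-bison-001", prompt=prompt)
# print(response.result)
```

Keeping the prompt construction in its own function makes it easy to swap in different source documents or question phrasings without touching the API plumbing.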
Watch videos of some of the world's top AI experts discussing everything from TensorFlow Extended to Kubernetes to AutoML to Coral.
In this talk, Hannes provides insights into machine learning engineering with TensorFlow Extended (TFX). He introduces how TFX handles machine learning pipeline tasks and how to orchestrate entire ML pipelines with it. The audience learns how to run ML production pipelines with Kubeflow Pipelines, freeing data scientists from maintaining production machine learning models.
Solving a data science problem usually requires multiple steps, such as extracting and transforming data, training a model, and deploying the model into production. In this session, we'll discuss how to specify those steps in Python as an ML pipeline. We'll show how to create a pipeline with Kubeflow Pipelines, a component of the open-source Kubeflow project, how to integrate TensorFlow Extended components into the pipeline, and how to deploy the pipeline to the hosted Cloud AI Pipelines environment on Google Cloud. The key takeaway is how to improve the reuse and reproducibility of the machine learning process.
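The extract-transform-train structure described in this session can be sketched in pure Python as a chain of named step functions. This is only an illustration of the pipeline idea: a real Kubeflow Pipeline expresses the same structure with the `kfp` DSL and runs each step in its own container, and the step names and toy data here are made up for the example.

```python
# Minimal pure-Python sketch of an ML pipeline: each step is a function,
# and the runner executes them in order, feeding each step's output into
# the next. Kubeflow Pipelines formalizes exactly this dependency chain.

def extract():
    """Stand-in for loading raw data from a source system."""
    return [1.0, 2.0, 3.0, 4.0]

def transform(data):
    """Stand-in for feature scaling: normalize by the maximum value."""
    return [x / max(data) for x in data]

def train(features):
    """Stand-in for model training: reduce features to a 'model' dict."""
    return {"weights": sum(features)}

def run_pipeline(steps):
    """Run steps sequentially, passing each output to the next step."""
    result = None
    for step in steps:
        result = step(result) if result is not None else step()
    return result

model = run_pipeline([extract, transform, train])
```

Because each step only depends on the previous step's output, the same chain can later be lifted into containerized pipeline components without changing the logic, which is the reuse and reproducibility point the session makes.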
Get started with your machine learning adventures by using state-of-the-art tools. We will talk about how to build on the amazing work done by others to jump-start your projects, how to make them more scalable, and how to reach a solution that is ready for production.
This talk provides an overview of TensorFlow Lite and the tools and resources it offers for building intelligent apps. I will walk through end-to-end computer vision examples with TFLite: from model training, conversion, and optimization to deployment on mobile and edge devices.
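The conversion and optimization steps mentioned above follow a standard TFLite pattern: train or load a Keras model, convert it with `tf.lite.TFLiteConverter`, and optionally enable quantization. A minimal sketch, assuming TensorFlow is installed, with a tiny untrained model standing in for a real computer vision model:

```python
# Sketch of the TFLite conversion step: Keras model -> optimized .tflite
# flatbuffer that can be shipped to a mobile or edge device. The one-layer
# model here is a placeholder for a trained vision model.
import tensorflow as tf

# Placeholder model; in practice this would be a trained vision network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

# Convert to the TFLite format, enabling default (dynamic-range) quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

# The resulting flatbuffer can be loaded by the TFLite interpreter on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
```

On a device, the same bytes would typically be written to a `.tflite` file and loaded by the platform's TFLite runtime rather than the Python interpreter used here.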
Coral is a complete toolkit for building products with local AI. Its on-device inferencing capabilities let you build products that are efficient, private, fast, and offline. In this talk, we will introduce Coral and walk participants through current applications. In the second part, we will run some hands-on demos performing inference on Coral devices.
From DevFest, this video discusses why this type of data is exciting, how we can use it in a secure and private way through federated computations, specifically federated learning, and how to experiment with these concepts using TensorFlow Federated (TFF).