Monday, September 30, 2024

Can AI Watch YouTube for You? Automating Insights with Gemini & Google Apps Script

Have you ever wished you could automatically process YouTube research (24/7, nonstop), summarizing key insights and seamlessly integrating them into your existing knowledge base? 🤪 My favorite author, Yuval Noah Harari, recently wrote a new book, Nexus, and I was curious what he said about it during his interviews. (Spoiler: he says that silicon chips can create spies that never sleep or financiers that never forget.)

 

This tutorial demonstrates how to build a research and archiving agent using my favorite Google Apps Script, the YouTube Data API, and Google's powerful Gemini multimodal model (Gemini 1.5 Pro 002).

This automated system streamlines the process of discovering, summarizing, and archiving information from YouTube videos, allowing you to focus on analysis and synthesis rather than manual transcription and summarization. 

The system is composed of two primary agents: the Researcher and the Librarian. The Researcher searches for relevant videos and saves their metadata. The Librarian processes these videos, generates summaries with Gemini, and appends them to a master Google Docs document.


1. Create a new Google Apps Script project: 
Navigate to https://script.new/ to create a new Google Apps Script project. 

2. Manifest Configuration: In your Apps Script project, go to Project settings and enable Show "appsscript.json" manifest file in editor. Then, replace the contents of the manifest file with the following scopes: 
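Here is a minimal sketch of what such a manifest can look like; the enabled YouTube advanced service and the exact scope list are my assumptions and may differ from your setup:

{
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "V8",
  "dependencies": {
    "enabledAdvancedServices": [
      { "userSymbol": "YouTube", "version": "v3", "serviceId": "youtube" }
    ]
  },
  "oauthScopes": [
    "https://www.googleapis.com/auth/youtube.readonly",
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/documents",
    "https://www.googleapis.com/auth/spreadsheets",
    "https://www.googleapis.com/auth/script.external_request"
  ]
}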

This manifest grants the script the necessary permissions to interact with various Google services, including
- YouTube,
- Google Cloud (for Gemini in Vertex AI),
- Google Docs, and Google Sheets.

Google Cloud Project: If you don't already have one, create a 🆕 Google Cloud project at https://console.cloud.google.com/projectcreate.


3. Enable the Vertex AI API and the YouTube Data API


 4. Configuration Constants: Add the following constants to your script, replacing the placeholders with your actual values: 
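A sketch of the constants, assuming the video database lives in a Google Sheet and the summaries go to a Google Doc; all names and values below are illustrative placeholders:

// Google Cloud project with the Vertex AI API enabled
const PROJECT_ID = 'your-gcp-project-id';
// Vertex AI region and Gemini model to call
const LOCATION = 'us-central1';
const MODEL_ID = 'gemini-1.5-pro-002';
// Google Sheet used as the "video database" and Google Doc for the summaries
const SPREADSHEET_ID = 'your-spreadsheet-id';
const SHEET_NAME = 'videos';
const DOC_ID = 'your-google-doc-id';
// What the Researcher should look for on YouTube
const SEARCH_QUERY = 'Yuval Noah Harari Nexus interview';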


5. 👨‍🔬 The Researcher Agent (Agent Researcher)
This agent searches YouTube for videos based on a query and stores relevant information in a Google Sheet. This acts as our "video database".
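A minimal sketch of the Researcher, assuming the YouTube advanced service from the manifest and a sheet with the columns videoId, title, publishedAt, and status:

function agentResearcher() {
  const sheet = SpreadsheetApp.openById(SPREADSHEET_ID).getSheetByName(SHEET_NAME);
  // Collect the video IDs we already know so we don't store duplicates
  const knownIds = new Set(sheet.getDataRange().getValues().map(row => row[0]));

  // Search YouTube for recent videos matching the query
  const results = YouTube.Search.list('snippet', {
    q: SEARCH_QUERY,
    type: 'video',
    order: 'date',
    maxResults: 25
  });

  results.items.forEach(item => {
    const videoId = item.id.videoId;
    if (knownIds.has(videoId)) return; // already in the database
    sheet.appendRow([
      videoId,
      item.snippet.title,
      item.snippet.publishedAt,
      'NEW' // the Librarian flips this to DONE after summarizing
    ]);
  });
}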
 


6. 🤓 The Librarian Agent (Agent Librarian)
The Librarian iterates through the video database, summarizes new videos using Gemini, and appends these summaries to a Google Doc. 
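A sketch of the Librarian, assuming the sheet layout above and the callGemini() helper from step 7; it hands Gemini the public video URL (see the note on the fileData part in step 7):

function agentLibrarian() {
  const sheet = SpreadsheetApp.openById(SPREADSHEET_ID).getSheetByName(SHEET_NAME);
  const doc = DocumentApp.openById(DOC_ID);
  const body = doc.getBody();
  const rows = sheet.getDataRange().getValues();

  rows.forEach((row, i) => {
    const [videoId, title, publishedAt, status] = row;
    if (status !== 'NEW') return; // only process videos we haven't summarized yet

    const videoUrl = 'https://www.youtube.com/watch?v=' + videoId;
    const prompt = 'Summarize the key insights of this video in a few bullet points.';
    const summary = callGemini(prompt, videoUrl);

    // Append the summary to the master Google Doc
    body.appendParagraph(title).setHeading(DocumentApp.ParagraphHeading.HEADING2);
    body.appendParagraph(videoUrl);
    body.appendParagraph(summary);

    // Mark the row as processed (status is the 4th column)
    sheet.getRange(i + 1, 4).setValue('DONE');
  });
  doc.saveAndClose();
}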




7. Calling Gemini API in Vertex AI
 This function handles the interaction with the Gemini API in Vertex AI. It sends requests and parses responses. 
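A sketch of the Vertex AI call with UrlFetchApp, using the script's OAuth token (cloud-platform scope). Passing the public YouTube URL as a fileData part is an assumption on my part; adjust the payload if you prefer to send a transcript as plain text instead:

function callGemini(prompt, videoUrl) {
  const url = 'https://' + LOCATION + '-aiplatform.googleapis.com/v1/projects/' + PROJECT_ID +
    '/locations/' + LOCATION + '/publishers/google/models/' + MODEL_ID + ':generateContent';

  const parts = [{ text: prompt }];
  if (videoUrl) {
    // Assumption: the Vertex AI endpoint accepts a public YouTube URL as fileUri
    parts.push({ fileData: { mimeType: 'video/mp4', fileUri: videoUrl } });
  }

  const payload = {
    contents: [{ role: 'user', parts: parts }],
    generationConfig: { temperature: 0.2, maxOutputTokens: 2048 }
  };

  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() },
    payload: JSON.stringify(payload),
    muteHttpExceptions: true
  });

  const data = JSON.parse(response.getContentText());
  // Return the first candidate's text, or an empty string if nothing came back
  return data.candidates && data.candidates.length
    ? data.candidates[0].content.parts.map(p => p.text || '').join('')
    : '';
}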


8. Running the System: After setting up the project and configuring the necessary parameters, you can run the agents. First, execute agentResearcher() to populate the video database. Then run agentLibrarian() to process and summarize the videos.

This setup leverages the power of LLMs like Gemini to automate time-consuming research tasks, allowing you to efficiently curate and integrate knowledge from YouTube videos directly into your workflow. Remember to manage your Google Cloud credits appropriately, as using the Gemini API will incur costs.

Sunday, March 31, 2024

Smart replacing images in Google Slides with Gemini Pro API and Vertex AI

Surely, you have also experienced having a presentation in which you needed to replace old content with new. Replacing text is very simple, because you just need to use the Replace function, and you can do it either in the Google Slides user interface or with a short script.
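For the text case, a one-liner sketch in Apps Script (the ID and strings are placeholders):

// Replace every occurrence of the old text across all slides of one presentation
SlidesApp.openById('your-presentation-id').replaceAllText('Old product name', 'New product name');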

The problem arises when you need to replace one image with another, for example, if your corporate logo is updated to a new graphic design or if one of your favorite cloud services updates its icons (Gmail, blink blink ;-)). It's still somewhat bearable with one presentation, but what do you do when, like me, you have thousands of Google Slides files on your Google Drive?


Fortunately, there are large language models and, specifically, multimodal models that allow input prompts to include images in addition to text. With Gemini Pro, you can have up to 16 such images as input. And then the old saying applies that one picture is worth a thousand words :)

I used Gemini Pro for exactly this use case in the Vertex AI service with integration into Google Apps Script, which could connect to my presentation, go through all the slides, and, if there was an image containing the old logo, replace it with the new logo. I will show you how you can replicate such a procedure yourself, and all you need for it is just a Google Cloud account.


1. Create a new Google Apps Script project https://script.new/


2. Go to Project settings and tick the checkbox Show "appsscript.json" manifest file in editor


3. Copy the manifest (appsscript.json) below
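A minimal manifest sketch for this project; the scope list is my assumption (drive.readonly is only there to load the replacement logo from Drive), so adjust it to your needs:

{
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "V8",
  "oauthScopes": [
    "https://www.googleapis.com/auth/presentations",
    "https://www.googleapis.com/auth/drive.readonly",
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/script.external_request"
  ]
}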


4. Prepare a Google Cloud project; if you don't have one, create one here: https://console.cloud.google.com/projectcreate


Then enable the Vertex AI API https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com



The Gemini Pro Vision API takes input that consists of parts (an array), where each item can be either text or binary data (either embedded inline or referenced via a URL link).

https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini
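The request body then looks roughly like this (a sketch; see the model reference above for the authoritative field names, with the image either embedded as base64 via inlineData or referenced via fileData):

{
  "contents": [{
    "role": "user",
    "parts": [
      { "text": "Is this the old or the new logo?" },
      { "inlineData": { "mimeType": "image/png", "data": "<base64-encoded image bytes>" } },
      { "fileData": { "mimeType": "image/png", "fileUri": "gs://your-bucket/new-logo.png" } }
    ]
  }]
}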


We compose our prompt as you might be used to, with the only difference being that we also load two images and tell the model which one is the old logo and which one is the new. We will use the few-shot learning technique for the examples.
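Here is a sketch of how such a prompt can be assembled in Apps Script, assuming the old and new logos are stored as image files on Drive (OLD_LOGO_FILE_ID and NEW_LOGO_FILE_ID are illustrative placeholders):

// Illustrative placeholders - Drive file IDs of the reference logo images
const OLD_LOGO_FILE_ID = 'drive-file-id-of-old-logo';
const NEW_LOGO_FILE_ID = 'drive-file-id-of-new-logo';

// Turn a Drive image into an inlineData part for Gemini
function imagePart(fileId) {
  const blob = DriveApp.getFileById(fileId).getBlob();
  return {
    inlineData: {
      mimeType: blob.getContentType(),
      data: Utilities.base64Encode(blob.getBytes())
    }
  };
}

// Few-shot style prompt: show the model the old and the new logo,
// then ask it to classify the image taken from a slide
function buildParts(slideImageBlob) {
  return [
    { text: 'The first image is the OLD logo.' },
    imagePart(OLD_LOGO_FILE_ID),
    { text: 'The second image is the NEW logo.' },
    imagePart(NEW_LOGO_FILE_ID),
    { text: 'Does the following image contain the OLD logo? Answer only YES or NO.' },
    {
      inlineData: {
        mimeType: slideImageBlob.getContentType(),
        data: Utilities.base64Encode(slideImageBlob.getBytes())
      }
    }
  ];
}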




Finally, all that's left is to create a function that loads all the slides in a presentation, loads all the images on each slide, and then sends each image to the Gemini Pro API to determine whether it shows the old or the new logo. If it is the old one, it is replaced directly in the presentation with the new image.
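A sketch of that function, building on the helpers and constants above (PRESENTATION_ID, PROJECT_ID, and LOCATION are placeholders you need to fill in):

const PRESENTATION_ID = 'your-presentation-id';
const PROJECT_ID = 'your-gcp-project-id';
const LOCATION = 'us-central1';

function getSlides() {
  const presentation = SlidesApp.openById(PRESENTATION_ID);
  const newLogoBlob = DriveApp.getFileById(NEW_LOGO_FILE_ID).getBlob();

  presentation.getSlides().forEach(slide => {
    slide.getImages().forEach(image => {
      const answer = callGeminiVision(buildParts(image.getBlob()));
      if (answer.toUpperCase().indexOf('YES') !== -1) {
        // Replace the old logo in place; the new image keeps the position and size
        image.replace(newLogoBlob);
      }
    });
  });
  presentation.saveAndClose();
}

// Minimal call to the gemini-pro-vision model on Vertex AI
function callGeminiVision(parts) {
  const url = 'https://' + LOCATION + '-aiplatform.googleapis.com/v1/projects/' + PROJECT_ID +
    '/locations/' + LOCATION + '/publishers/google/models/gemini-pro-vision:generateContent';
  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() },
    payload: JSON.stringify({ contents: [{ role: 'user', parts: parts }] }),
    muteHttpExceptions: true
  });
  const data = JSON.parse(response.getContentText());
  return data.candidates ? data.candidates[0].content.parts[0].text : '';
}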
And that's all. Now you just need to run the getSlides() function, which will replace all the old Gmail logos with the new ones. Of course, the script can be modified to go through all your files, or better yet, to go through all the files in the company via domain-wide delegation.

Google Cloud credits are provided for this project
#GeminiSprint

Tuesday, August 23, 2022

List all GCP regions with Google (unofficial) API endpoint

I have several scenarios where I need to list all GCP regions (e.g., when working with the Cloud Billing API, https://cloud.google.com/billing/docs/reference/rest).




I was surprised that there is no API for that. 

When you search for "list of GCP regions", the top results point to documentation pages such as https://cloud.google.com/compute/docs/regions-zones or https://cloud.google.com/about/locations, which are not suitable for programmatic access.

I recently found an endpoint (not an API!) with the list of IP ranges for each GCP region:

https://www.gstatic.com/ipranges/cloud.json


That was the last piece of the puzzle to create my desired function.

Here is a snippet for Google Apps Script:
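A minimal sketch: it downloads cloud.json and collects the unique region names from the scope field of each prefix, skipping the global entries:

function listGcpRegions() {
  const url = 'https://www.gstatic.com/ipranges/cloud.json';
  const data = JSON.parse(UrlFetchApp.fetch(url).getContentText());

  // Each prefix carries a "scope" such as "us-central1" or "global"
  const regions = new Set();
  data.prefixes.forEach(prefix => {
    if (prefix.scope && prefix.scope !== 'global') {
      regions.add(prefix.scope);
    }
  });

  const sorted = [...regions].sort();
  Logger.log(sorted);
  return sorted;
}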


Friday, February 25, 2022

Get filtered rows in Google Sheets with Google Apps Script

Google Sheets allows you to filter data in the grid.

Sometimes you need to filter data and then work with it through the API.
https://developers.google.com/sheets/api

The Google Sheets API has an endpoint that returns the rowMetadata array.

You can iterate over all rows and check whether the hiddenByFilter property is set.



Here is a snippet showing how to get the filtered rows with Google Apps Script:
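A sketch that assumes the Sheets advanced service is enabled under Services in the editor and that the data starts in row 1; it reads rowMetadata for the active sheet and keeps only the rows not hidden by the filter:

function getFilteredRows() {
  const spreadsheetId = SpreadsheetApp.getActiveSpreadsheet().getId();
  const sheet = SpreadsheetApp.getActiveSheet();

  // Ask only for the per-row metadata of the active sheet
  const response = Sheets.Spreadsheets.get(spreadsheetId, {
    ranges: [sheet.getName()],
    fields: 'sheets(data(rowMetadata(hiddenByFilter)))'
  });

  const rowMetadata = response.sheets[0].data[0].rowMetadata;
  const values = sheet.getDataRange().getValues();

  // Keep only the rows that are still visible after filtering
  const visibleRows = values.filter((row, i) => {
    const meta = rowMetadata[i];
    return !(meta && meta.hiddenByFilter);
  });

  Logger.log(visibleRows);
  return visibleRows;
}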