How to Extract Metadata from YouTube Videos
YouTube video metadata includes title, description, tags, view counts, thumbnails, and dozens of other structured fields. This guide covers four practical ways to extract that data: the YouTube Data API v3, the yt-dlp command-line tool, custom Python scripts, and browser-based viewers.
What YouTube Video Metadata Includes
YouTube stores structured information about every video on the platform. This goes well beyond the title and description visible on the watch page. A complete metadata record includes technical details, engagement metrics, distribution settings, and topic classifications.
Here's what you can extract from a single YouTube video:
Identification fields: video ID, channel ID, channel title, publish date and time, and numeric category ID.
Content fields: title, full description text (including timestamps and hashtags), tags (up to 500 characters total), thumbnails in four sizes (default, medium, high, and maxres), default language, and localized title variants.
Technical fields: duration in ISO 8601 format, dimension (2D or 3D), definition (HD or SD), caption availability, licensed content flag, content rating, and projection type.
Engagement fields: view count, like count, comment count, and favorite count.
Distribution fields: embeddable flag, public stats viewable flag, region restrictions, and age-gate status.
Topic fields: Wikipedia-linked topic categories that YouTube assigns automatically, such as "Music," "Gaming," or "Science."
The YouTube Data API v3 can return these fields in structured JSON. Whether you need metadata for competitive analysis, content auditing, archival, or feeding a recommendation engine, the method you choose depends on your volume requirements and technical setup.
Four approaches cover most use cases: the official YouTube Data API for structured batch access, yt-dlp for API-free command-line extraction, Python scripts for custom pipelines, and browser tools for quick one-off lookups.
Method 1: YouTube Data API v3
The YouTube Data API v3 is Google's official REST API for accessing video data. It returns clean JSON, supports batching up to 50 videos per request, and gives you fine-grained control over which metadata groups you receive.
Setting Up API Access
- Go to the Google Cloud Console and create a new project.
- In the API Library, search for "YouTube Data API v3" and click Enable.
- Under Credentials, create an API key. For read-only metadata extraction, an API key is all you need. OAuth is only required for accessing private videos or modifying data.
Making a Request
The videos.list endpoint accepts comma-separated video IDs and returns metadata organized by "parts." Each part is a group of related fields:
- snippet: title, description, tags, thumbnails, publish date, channel info, category
- contentDetails: duration, dimension, definition, caption status, region restrictions
- statistics: view count, like count, comment count
- topicDetails: Wikipedia-linked topic categories
- status: upload status, privacy status, license, embeddable flag
Here's a curl request that pulls all the common parts at once:
curl "https://www.googleapis.com/youtube/v3/videos\
?part=snippet,contentDetails,statistics,topicDetails\
&id=dQw4w9WgXcQ\
&key=YOUR_API_KEY"
The response is a JSON object with an items array. Each item contains the requested parts as nested objects. You can request multiple parts in one call without extra quota cost.
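Once you have the response, extracting fields is a matter of walking the nested parts. Here's a minimal sketch using a hand-written response in the same shape; the sample values are illustrative, not real API output:

```python
# Illustrative videos.list response, trimmed to the fields used below;
# real responses nest the same keys under each requested part.
response = {
    "items": [
        {
            "id": "dQw4w9WgXcQ",
            "snippet": {"title": "Example Title",
                        "publishedAt": "2009-10-25T06:57:33Z"},
            "contentDetails": {"duration": "PT3M33S"},
            "statistics": {"viewCount": "1000000"},
        }
    ]
}

def summarize(resp):
    """Flatten the nested parts into (id, title, duration, views) rows."""
    return [
        (item["id"],
         item["snippet"]["title"],
         item["contentDetails"]["duration"],
         int(item["statistics"].get("viewCount", 0)))  # counts arrive as strings
        for item in resp.get("items", [])
    ]

rows = summarize(response)
print(rows)
```

Note the `.get()` calls: statistics fields can be absent when an uploader hides counts, so defaulting avoids KeyErrors on edge-case videos.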
Quota Management
Every Google Cloud project gets 10,000 quota units per day by default. A videos.list call costs 1 unit regardless of how many parts you include. Since you can batch up to 50 video IDs per request, a single unit covers metadata for 50 videos. At that rate, 10,000 units gets you metadata for 500,000 videos per day.
Avoid the search.list endpoint when you already have video IDs. Search costs 100 units per request. Instead, use playlistItems.list (1 unit) to collect video IDs from a channel or playlist first, then batch them into videos.list calls.
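The quota arithmetic above is worth encoding when planning a crawl. A small sketch of the cost calculation, using the documented prices (1 unit per videos.list call, up to 50 IDs per call):

```python
def quota_cost(num_videos, batch_size=50, units_per_call=1):
    """Units consumed by videos.list when IDs are batched."""
    calls = -(-num_videos // batch_size)  # ceiling division
    return calls * units_per_call

print(quota_cost(10_000))   # 200 units
print(quota_cost(500_000))  # 10,000 units: one full daily quota
```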
What the API Doesn't Return
The Data API does not include full caption text or transcripts. For captions, you need the separate YouTube Captions API, which requires OAuth and only works for videos you own or manage. The API also returns only current engagement counts, not historical data. If you need view count trends over time, you'll have to poll periodically and store snapshots yourself.
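If you do build your own snapshot store, the storage side can be as simple as appending timestamped rows to a JSONL log. A sketch, assuming the count comes from a videos.list call elsewhere; `record_snapshot` and the file name are hypothetical, not part of any API:

```python
import json
import time

def record_snapshot(video_id, view_count, path="view_history.jsonl"):
    """Append one timestamped view-count observation to a JSONL log."""
    row = {"video_id": video_id, "view_count": view_count, "ts": int(time.time())}
    with open(path, "a") as f:
        f.write(json.dumps(row) + "\n")
    return row

# Placeholder count; a real pipeline would pass statistics.viewCount here.
snap = record_snapshot("dQw4w9WgXcQ", 1_000_000, path="demo_history.jsonl")
```

Run this on a schedule (cron, a cloud function) and you accumulate the time series the API itself won't give you.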
Method 2: yt-dlp for Metadata Without an API Key
yt-dlp is an open-source command-line tool that extracts video information directly from YouTube pages. It requires no API key and no Google Cloud account. The --dump-json flag outputs a JSON object containing every metadata field yt-dlp can parse, which often includes fields the official API doesn't expose.
Basic Extraction
Pull metadata for a single video:
yt-dlp --dump-json "https://www.youtube.com/watch?v=dQw4w9WgXcQ" > metadata.json
The resulting JSON includes the video title, description, upload date, view count, like count, duration, tags, categories, thumbnail URLs, all available download formats with resolutions and bitrates, automatic caption track URLs, chapter markers, and uploader information.
Extracting Specific Fields
If you only need a few fields, the --print flag is cleaner than parsing full JSON:
yt-dlp --print title --print upload_date --print view_count \
"https://www.youtube.com/watch?v=dQw4w9WgXcQ"
Bulk Extraction from Playlists
yt-dlp handles playlists natively. For a quick listing of video IDs and titles:
yt-dlp --dump-json --flat-playlist \
"https://www.youtube.com/playlist?list=PLrAXtmErZgOeiKm4sgNOknGvNjby9efdf" \
> playlist_listing.jsonl
Each line of the output is a single JSON object describing one video. The --flat-playlist flag skips loading each watch page and extracts only the listing data, which makes it much faster.
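Because the file is JSONL, processing it in Python is one json.loads per line. A sketch, with a tiny sample file standing in for real yt-dlp output (in flat mode, id and title are the fields you can rely on):

```python
import json

def load_jsonl(path):
    """Parse a JSON-lines file into a list of dicts, skipping blank lines."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Sample line in the same shape --flat-playlist emits (values illustrative).
with open("playlist_listing.jsonl", "w") as f:
    f.write('{"id": "dQw4w9WgXcQ", "title": "Example"}\n')

videos = load_jsonl("playlist_listing.jsonl")
print(len(videos), videos[0]["title"])
```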
For complete metadata on every video in the playlist, drop the flag:
yt-dlp --dump-json \
"https://www.youtube.com/playlist?list=PLrAXtmErZgOeiKm4sgNOknGvNjby9efdf" \
> full_metadata.jsonl
Archiving Metadata Alongside Downloads
If you're downloading videos and want to keep the metadata alongside them, add --write-info-json:
yt-dlp --write-info-json -o "%(title)s.%(ext)s" \
"https://www.youtube.com/watch?v=dQw4w9WgXcQ"
This saves a .info.json file next to each downloaded video, creating a permanent metadata record.
When to Choose yt-dlp Over the API
yt-dlp works best when you don't want to deal with Google Cloud credentials, when you need data the API doesn't expose (like available formats and auto-generated captions), or for small-to-medium extraction jobs. The tradeoff is rate limiting: YouTube may throttle or block requests if you hit the site too fast. Adding --sleep-interval 2 between requests helps, but for thousands of videos, the API's batch endpoints are more reliable.
Turn Extracted Video Metadata into Searchable Data
Upload your YouTube metadata exports to Fast.io and let Metadata Views build a queryable database automatically. Start with 50 GB free, no credit card required.
Method 3: Custom Python Scripts
Python gives you the flexibility to combine API calls with yt-dlp, handle errors, and write results to whatever format your downstream tools expect. Here's a practical starting point.
API-Based Extraction
Install the official client library:
pip install google-api-python-client
This script batches video IDs in groups of 50 to minimize quota usage:
from googleapiclient.discovery import build
import json

API_KEY = "your_api_key_here"
youtube = build("youtube", "v3", developerKey=API_KEY)

def get_video_metadata(video_ids):
    """Fetch metadata in batches of 50 IDs (1 quota unit per batch)."""
    results = []
    for i in range(0, len(video_ids), 50):
        batch = video_ids[i:i + 50]
        response = youtube.videos().list(
            part="snippet,contentDetails,statistics,topicDetails",
            id=",".join(batch)
        ).execute()
        results.extend(response.get("items", []))
    return results

video_ids = ["dQw4w9WgXcQ", "jNQXAC9IVRw", "9bZkp7q19f0"]
metadata = get_video_metadata(video_ids)

with open("video_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

print(f"Extracted metadata for {len(metadata)} videos")
Each batch of 50 costs 1 quota unit, so 10,000 videos costs just 200 units out of your daily 10,000 allowance.
Using yt-dlp from Python
When you need fields the API doesn't provide, call yt-dlp as a subprocess:
import subprocess
import json

def get_ytdlp_metadata(video_url):
    """Run yt-dlp --dump-json and return the parsed metadata dict."""
    result = subprocess.run(
        ["yt-dlp", "--dump-json", video_url],
        capture_output=True, text=True
    )
    if result.returncode != 0:
        return None  # video unavailable, private, or request throttled
    return json.loads(result.stdout)

meta = get_ytdlp_metadata(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
)
if meta:
    print(meta["title"], meta["view_count"], meta["upload_date"])
Exporting to CSV
For spreadsheet analysis, convert the API output to CSV:
import csv

fields = [
    "id", "title", "publishedAt",
    "viewCount", "likeCount", "duration"
]

with open("metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for video in metadata:
        writer.writerow({
            "id": video["id"],
            "title": video["snippet"]["title"],
            "publishedAt": video["snippet"]["publishedAt"],
            "viewCount": video["statistics"].get("viewCount", "0"),
            "likeCount": video["statistics"].get("likeCount", "0"),
            "duration": video["contentDetails"]["duration"],
        })
These scripts handle the common cases. You can extend them with retry logic for rate-limited requests, filters for deleted or private videos, and database inserts instead of file writes depending on your workflow.
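The retry piece, at least, is generic enough to sketch. A minimal exponential-backoff wrapper; `with_retries` is a hypothetical helper, and the flaky function below simulates a throttled endpoint rather than calling a real API:

```python
import time

def with_retries(fn, attempts=4, base_delay=1.0):
    """Call fn(), retrying on exception with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts; let the caller see the error
            time.sleep(base_delay * (2 ** i))

# Simulated rate-limited call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated HTTP 429")
    return "ok"

result = with_retries(flaky, base_delay=0.01)
```

In a real pipeline you'd catch only the retryable errors (HTTP 403 quota and 429 rate-limit responses) rather than every Exception.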
Method 4: Browser Tools for Quick Lookups
When you need metadata for a single video and don't want to write code or install anything, browser-based approaches work in seconds.
Online Metadata Viewers
Several free tools let you paste a YouTube URL and see the full metadata breakdown. YouTube Metadata by Matt Walsh displays the raw API response, including tags, exact publish timestamp, category, and thumbnail URLs in all sizes. Tools like Toolixr and WildAndFree offer similar functionality with cleaner formatting.
These viewers use the YouTube Data API behind the scenes, so they return the same fields you'd get from a direct API call. The advantage is zero setup: no API key, no code, no installation.
Inspecting Metadata with Browser DevTools
YouTube embeds video metadata in JavaScript objects that load with the watch page. You can access them directly:
- Open a YouTube video in your browser.
- Open DevTools (F12 on Windows/Linux, Cmd+Option+I on Mac).
- Go to the Console tab.
- Type `ytInitialPlayerResponse` and press Enter.
This returns a JavaScript object containing the video's full player configuration, including title, description, duration, view count, thumbnail URLs, caption track URLs, and content rating details. Some of these fields aren't visible in the page UI.
For the broader page data (including related videos and recommendations), try `ytInitialData` in the console instead.
Browser Extensions
Extensions like vidIQ and TubeBuddy overlay metadata directly on the YouTube watch page. They show tags, estimated earnings, SEO scores, and competitive metrics. Both were originally built for creators optimizing their own videos, but they're equally useful for competitive research. Each has a free tier with basic metadata access.
Browser tools are best for spot-checking individual videos or quick competitive research. For anything beyond a handful of videos, the API or yt-dlp methods save significant time.
Storing and Querying Extracted Metadata
Pulling metadata out of YouTube is the first half of the job. The second half is organizing it so you can actually search, filter, and share the results.
Local Storage Options
For small projects, JSON files are fine. For anything you want to query, SQLite gives you SQL without running a server:
CREATE TABLE videos (
id TEXT PRIMARY KEY,
title TEXT,
view_count INTEGER,
like_count INTEGER,
published_at TEXT,
duration TEXT,
tags TEXT
);
SELECT title, view_count
FROM videos
WHERE view_count > 100000
ORDER BY published_at DESC;
For larger datasets or team access, PostgreSQL adds better indexing, full-text search, and concurrent connections.
Cloud Workspaces for Team Collaboration
When multiple people need access to the same metadata, a shared cloud workspace removes the friction of emailing CSV files or syncing database credentials.
Fast.io's Metadata Views turn uploaded files into a queryable database without writing a schema by hand. Upload your extracted JSON or CSV files to a workspace, describe the fields you want (video title, view count, upload date, tags, category), and Metadata Views builds a typed schema and populates a sortable, filterable spreadsheet automatically. You can add new columns later without reprocessing the original files.
This is particularly practical when you're combining YouTube metadata with other data sources. Drop video metadata exports, analytics reports, and content calendars into the same workspace, and Metadata Views extracts structured fields from each file type. Your team can filter, sort, and export the combined data from the browser.
For automated workflows, Fast.io's API and MCP server let scripts and AI agents upload extraction results to a workspace, trigger structured extraction, and query results programmatically. The free plan includes 50 GB of storage and 5,000 AI credits per month, which covers most video metadata archival use cases without needing a credit card.
Choosing the Right Storage Approach
For solo research projects, SQLite or flat JSON files keep things simple. For team workflows where non-technical stakeholders need to browse and filter metadata, a cloud workspace with automatic schema extraction saves everyone from learning SQL. For high-volume pipelines feeding analytics dashboards, a managed database like PostgreSQL or BigQuery gives you the query performance and scalability you'll need.
Frequently Asked Questions
How do I get metadata from a YouTube video?
The fastest way is to use an online metadata viewer like mattw.io/youtube-metadata. Paste the video URL and you'll see the title, description, tags, view count, publish date, and category. For programmatic access, use the YouTube Data API v3 with a free API key, or install yt-dlp and run `yt-dlp --dump-json VIDEO_URL` from the command line.
Can I extract YouTube video tags?
Yes. The YouTube Data API v3 returns tags in the snippet.tags field when you call the videos.list endpoint. Tags are also included in the JSON output from yt-dlp's --dump-json flag. Note that not all videos have tags set by the uploader, and YouTube does not publicly display tags on the watch page, so an API or CLI tool is the only way to see them.
How do I use the YouTube Data API for metadata?
Create a Google Cloud project, enable the YouTube Data API v3, and generate an API key. Then call the videos.list endpoint with the video ID and the parts you want (snippet, statistics, contentDetails). The API returns JSON with all requested fields. Each call costs 1 quota unit, and you can batch up to 50 video IDs per request.
Is there a free YouTube metadata extractor?
Yes. yt-dlp is a free, open-source command-line tool that extracts metadata without any API key or account. Online tools like mattw.io/youtube-metadata and Toolixr also let you view metadata for free through a browser. The YouTube Data API itself is free with a daily quota of 10,000 units, which covers metadata for up to 500,000 videos per day when you batch requests.
What metadata fields does YouTube store for each video?
YouTube stores dozens of metadata fields per video. These include identification data (video ID, channel ID, publish date), content data (title, description, tags, thumbnails, language), technical data (duration, resolution, caption availability), engagement data (views, likes, comments), and distribution settings (embed permissions, region restrictions, age gates). The exact fields available depend on which extraction method you use.
How do I extract metadata from an entire YouTube playlist?
With yt-dlp, run `yt-dlp --dump-json PLAYLIST_URL` to get full metadata for every video. Add --flat-playlist for a faster listing with basic fields only. With the YouTube Data API, call playlistItems.list to get video IDs from the playlist (1 quota unit per 50 items), then batch those IDs into videos.list calls for complete metadata.