
S3 Relay

GLM-4.6V, Qwen3.6, and MiMo-V2 Omni all accept direct video URLs, but base64-encoded local files are capped at roughly 10–12 MB. When a file exceeds that limit, the server first tries to fall back to Gemini or Kimi (which accept larger uploads), then falls back to frame-based analysis as a last resort.

For the best quality with large local videos — especially with GLM, Qwen, or MiMo as your primary provider — configure the S3 relay: the server uploads the file to S3 automatically and passes a presigned URL to the provider, bypassing the base64 limit entirely.

GLM, Qwen, and MiMo require the serving endpoint to provide Content-Length and Content-Type headers alongside the video. AWS S3 presigned URLs include both headers automatically.

1. Create an AWS account

Before setting up the S3 relay, you’ll need an AWS account and access credentials.

  1. Go to aws.amazon.com and click Create an AWS Account.
  2. Enter your email address, a password, and an AWS account name.
  3. Choose the Basic Support — Free plan (sufficient for S3 relay usage).
  4. Fill in your contact and billing information. A valid credit or debit card is required, but S3 usage within the free tier costs nothing.
  5. Verify your identity via phone call or SMS.
  6. Once confirmed, sign in to the AWS Management Console.

2. Get your AWS Access Key ID and Secret Access Key


The S3 relay needs programmatic access to your S3 bucket. You’ll create an IAM user with limited permissions:

  1. In the AWS Console, search for IAM in the top search bar and open the IAM dashboard.
  2. Click Users in the left sidebar, then Create user.
  3. Enter a user name (e.g., video-mcp-s3) and click Next.
  4. Under Permissions options, select Attach policies directly.
  5. Click Create policy — this opens a new tab:
    • Switch to the JSON tab and paste the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "s3:PutObject",
              "s3:GetObject",
              "s3:DeleteObject",
              "s3:ListBucket"
            ],
            "Resource": [
              "arn:aws:s3:::your-globally-unique-bucket-name",
              "arn:aws:s3:::your-globally-unique-bucket-name/*"
            ]
          }
        ]
      }
  • Replace your-globally-unique-bucket-name with your actual bucket name (you’ll create it in the next step).
  • Click Next, give the policy a name like VideoMcpS3Access, then Create policy.
  6. Go back to the user creation tab, click the refresh icon, search for VideoMcpS3Access, select it, and click Next, then Create user.
  7. Open the newly created user and go to the Security credentials tab.
  8. Under Access keys, click Create access key.
  9. On Access key best practices & alternatives, review the guidance. If you still need a long-term access key for local development, choose Other or the closest equivalent programmatic/local-code option shown in your console, then click Next.
  10. Optionally add a description tag, then click Create access key.
  11. Copy and save the Access Key ID and Secret Access Key — you won’t be able to see the secret key again after closing this dialog.

3. (Optional) Install and configure the AWS CLI


The AWS CLI is only needed if you want to create buckets from the terminal or use the ~/.aws/credentials method instead of environment variables. If you plan to add credentials directly to your MCP env block, you can skip this step.

  • Windows: Download the installer from aws.amazon.com/cli or run:
    winget install Amazon.AWSCLI
  • macOS:
    curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
    sudo installer -pkg AWSCLIV2.pkg -target /
  • Linux:
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install

Run the following command and paste your Access Key ID and Secret Access Key when prompted:

aws configure

You’ll be asked for:

  • AWS Access Key ID — paste the key you saved earlier
  • AWS Secret Access Key — paste the secret key you saved earlier
  • Default region name — enter your preferred region (e.g., us-east-1)
  • Default output format — press Enter for json

This stores your credentials in ~/.aws/credentials and ~/.aws/config, which the MCP server reads automatically.

4. Create the S3 bucket

If you installed the AWS CLI, create the bucket from the terminal:

aws s3 mb s3://your-globally-unique-bucket-name

Or create it manually in the S3 Console:

  1. Open the S3 dashboard and click Create bucket.
  2. Enter a globally unique bucket name (e.g., your-globally-unique-bucket-name). S3 bucket names must be unique across all AWS accounts worldwide, so you may need to add a suffix like your initials, a year, or a random string.
  3. Choose the AWS Region you want to use for the bucket. This should match AWS_REGION in your MCP config or AWS CLI profile.
  4. Leave Block all public access enabled. The bucket does not need to be public because the server uses presigned URLs.
  5. Keep the default Object Ownership setting (ACLs disabled / Bucket owner enforced).
  6. Leave the remaining settings at their defaults unless your organization requires something different, then click Create bucket.
5. Point the MCP server at the bucket

Add AWS_S3_BUCKET to your MCP server configuration:

{
  "servers": {
    "videoMcp": {
      "type": "stdio",
      "command": "video-context-mcp",
      "env": {
        "AWS_S3_BUCKET": "your-globally-unique-bucket-name",
        "GEMINI_API_KEY": "your-gemini-key"
      }
    }
  }
}

The server resolves AWS credentials in this order — you only need to configure one:

  1. Environment variables — Add directly to your MCP env block (no AWS CLI needed):

    "AWS_S3_BUCKET": "your-globally-unique-bucket-name",
    "AWS_ACCESS_KEY_ID": "AKIA...",
    "AWS_SECRET_ACCESS_KEY": "your-secret-key",
    "AWS_REGION": "us-east-1"
  2. ~/.aws/credentials — If the AWS CLI is configured, credentials are picked up automatically. Only AWS_S3_BUCKET is needed in your MCP config:

    "AWS_S3_BUCKET": "your-globally-unique-bucket-name"
  3. IAM instance role / ECS task role — For AWS-hosted environments.
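The three-step resolution order can be sketched in a few lines. The function name and the return labels are hypothetical; in practice the AWS SDK performs this chain itself:

```python
import configparser
from pathlib import Path

def resolve_credentials(env: dict, credentials_file: Path) -> str:
    """Illustrative sketch of the credential resolution order described above."""
    # 1. Environment variables win when both keys are present.
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "env"
    # 2. Otherwise fall back to the shared ~/.aws/credentials file.
    if credentials_file.exists():
        ini = configparser.ConfigParser()
        ini.read(credentials_file)
        if ini.has_option("default", "aws_access_key_id"):
            return "credentials-file"
    # 3. Finally, rely on an instance/task role resolved by the AWS SDK.
    return "instance-role"
```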

Every time you analyze a local video (or a platform download like YouTube) with GLM, Qwen, or MiMo:

  1. The server detects the file is too large for base64 encoding.
  2. The file is uploaded to s3://your-globally-unique-bucket-name/video-mcp-relay/<uuid>.<ext>.
  3. A presigned URL (valid for 1 hour) is passed to the AI provider.
  4. The provider downloads the video directly from S3.
  5. The object is kept in the bucket for reuse within the same session.
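The object key in step 2 follows a simple pattern that can be reproduced directly; the helper name below is hypothetical, but the video-mcp-relay/<uuid>.<ext> layout matches the path shown above:

```python
import uuid
from pathlib import Path

RELAY_PREFIX = "video-mcp-relay"

def relay_key(local_path: str) -> str:
    """Build the S3 object key video-mcp-relay/<uuid>.<ext> for a local file."""
    ext = Path(local_path).suffix.lstrip(".") or "bin"
    return f"{RELAY_PREFIX}/{uuid.uuid4()}.{ext}"
```

The random UUID keeps concurrent uploads from colliding, and the shared prefix makes it easy to sweep orphaned objects at startup.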

Relayed S3 objects are deleted automatically when the MCP server session ends. Orphaned objects from crashed sessions are swept at next startup.

To keep objects in the bucket for reuse across sessions (useful for large files you analyze repeatedly):

"AWS_S3_RELAY_CLEANUP": "false"
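Since env-block values arrive as strings, the flag needs an explicit string-to-bool interpretation; a plausible sketch (the helper name is assumed, not the server's actual code):

```python
def cleanup_enabled(env: dict) -> bool:
    """Interpret AWS_S3_RELAY_CLEANUP; anything but an explicit 'false' keeps the default."""
    return env.get("AWS_S3_RELAY_CLEANUP", "true").strip().lower() != "false"
```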

AWS S3 free tier covers 5 GB storage + 20K GET requests/month for 12 months. After the free tier, storage costs roughly $0.023/GB/month — negligible for most use cases.
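To put the post-free-tier pricing in perspective, storage cost scales linearly with the rate quoted above (actual S3 pricing varies by region and storage class):

```python
STORAGE_USD_PER_GB_MONTH = 0.023  # approximate S3 Standard rate cited above

def monthly_storage_cost(gb: float) -> float:
    """Estimated monthly storage cost in USD for gb gigabytes of relayed video."""
    return round(gb * STORAGE_USD_PER_GB_MONTH, 2)
```

Even 100 GB of retained video works out to only a couple of dollars a month.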

You can also pass a presigned URL directly to any tool without configuring the relay:

aws s3 cp my-video.mp4 s3://your-globally-unique-bucket-name/my-video.mp4
aws s3 presign s3://your-globally-unique-bucket-name/my-video.mp4 --expires-in 3600
# → https://your-globally-unique-bucket-name.s3.amazonaws.com/my-video.mp4?X-Amz-...

Pass the resulting URL as videoPath to analyze_video, summarize_video, or any other tool.
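Before passing a presigned URL along, it can help to sanity-check that it has not expired; the SigV4 signature parameters are ordinary query-string fields. The sample URL in the test is synthetic:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import parse_qs, urlparse

def presign_expiry(url: str) -> datetime:
    """Compute when a SigV4 presigned URL expires from X-Amz-Date + X-Amz-Expires."""
    params = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(
        params["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(params["X-Amz-Expires"][0]))
```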

Variable                Description                                            Default
AWS_S3_BUCKET           S3 bucket name for automatic relay
AWS_ACCESS_KEY_ID       AWS access key ID
AWS_SECRET_ACCESS_KEY   AWS secret access key
AWS_REGION              AWS region (e.g. us-east-1)
AWS_S3_RELAY_CLEANUP    Set false to keep relayed objects after session end    true