
Serving Videos to Authenticated Users using Amazon AWS and Ruby on Rails

coding ruby on rails amazon aws video software

Update: The original posting of this article left out two key points that have to do with serving your videos over SSL and allowing your second Cloudfront instance to access your S3 MP4 bucket. Scroll down to the section on 'Secondary Cloudfront instance' for the details.

So we recently got a project to do something we have never done before: create a web application that would only serve videos to authenticated users.

It's a pretty common use case we run into a lot: if you're logged in as someone who is authorized to view the videos, you can click on a page and watch a video. If you're not, then you can't view that page, NOR can you copy the video's link to watch it directly. Now, as common as this is to use, it isn't super common to develop. At least not for us.

Yes, you can use streaming services like Vimeo etc., but what if your client wants their own custom solution? This is the challenge we were faced with.

And it turns out, there were really two things we had to solve:

  • Serving streaming video on-demand
  • Securing the streaming video to only authenticated users

(Just a small note: video streaming usually breaks down into two categories, live or on-demand. Live streaming is when you are shooting video and streaming it to users' devices at the same time. On-demand is like traditional YouTube, Vimeo, or Netflix - where your users are watching pre-recorded video. For our project, we were implementing on-demand video.)

Ruby on Rails is our development web framework of choice, and AWS S3 is our asset storage of choice. So we knew the solution involved some combination of those two.

Now, the thing is - a good chunk of this is documented around the web, but I didn't come across a post that showed the end-to-end solution for Rails. So here you go.

Amazon Web Services

If you don't already know, Amazon has this entire part of its business called Amazon Web Services, separate from its consumer-facing online shopping platform. Amazon Web Services is a collection of services that Amazon provides to developers to help them develop applications. They have everything from virtual servers, to databases, to media encoding, to storage systems. Each one of these has its own name. For example, the storage solution is called "S3". You can do some more reading on Amazon Web Services on their official home page.

AWS Answers

When I started this project, I knew nothing about streaming video to devices. I originally thought the easiest thing to do would be to upload a video to Amazon S3 and just have a link to that video in the HTML code. As I started to research hosting video on AWS though, it turns out that is not a good solution. With that approach, you force every user to download the entire video, scrubbing back and forth is not ideal, and it's not true streaming in the sense that the browser isn't downloading small packets of the video.

The preferred solution involves encoding your video into streamable chunks and serving those to the customer; only if their browser doesn't support streaming do you serve them the entire file. This solution also helps with dynamically changing the quality depending on the user's connection speed. So the first thing to consider is encoding your video into multiple formats that browsers support to optimize the video viewing experience.

On top of that, Amazon recommends that you consider using their CDN service, Cloudfront, to serve your assets to your users. What a CDN does is effectively copy your resources to multiple servers so that when a browser accesses a resource, it grabs it from a server that is geographically closer to it. This ensures fast responses and load balancing across all your different users. So the second thing to consider is setting up Cloudfront to serve your videos.

To do all this, there are actually a lot of moving parts and a lot of complexity involved. The great thing is, Amazon actually does supply all the services needed to execute this, but the question is how do you set it all up?

Well that's where AWS Answers comes in.

AWS Answers is a collection of solutions to common problems. So let's say you wanted to build an "Internet of Things" solution. AWS Answers has a solution for that which sets up everything you need to get up and running. Let's say you wanted to create a backend server for a mobile app - well, there is an AWS Answer for that as well.

AWS Answers comes with documentation such as guides and FAQs about how to set up everything you need. For example, it may tell you, "You should set up an AWS DynamoDB table and an AWS S3 bucket...". But the coolest thing about AWS Answers is that the solutions also come with automatic deployment scripts. This means that you can click a button, fill out a couple of fields, and then boom - AWS automatically sets up everything you need. It's pretty amazing.

And guess what? There is an AWS Answer for On-demand Video Streaming.

Video On-Demand on AWS

So this article won't explain all the details of the "Video On-Demand on AWS" AWS Answer, but I will break down the basics of all the moving parts. When you deploy this AWS Answer, here are some of the major parts that get set up for you:

  • S3 Buckets (both for the original video files, and the transcoded files)
  • Dynamo DB (a database to keep track of your video files)
  • MediaConvert (to transcode your actual videos)
  • CloudFront (to serve your files to your users)

The AWS Answer actually sets up Lambda functions and Step functions as well, but I want to concentrate on the major parts in this article. You can read about everything else on the AWS Answer Page.

The basic workflow is this.

  1. You upload a video file into one of the S3 buckets that the AWS Answer set up for you (the source bucket).
  2. The bucket is set up to automatically run a transcode job on any video files in that bucket.
  3. The transcode job starts to transcode your video file into appropriate streaming formats.
  4. The transcode job drops its completed files (transcoded video files and thumbnails) into another S3 bucket (the destination bucket) that the AWS Answer set up for you.
  5. The newly transcoded video files are now available to the Cloudfront instance that the AWS Answer set up for you.
  6. You put the Cloudfront URL to your video into your code.

That is the basic setup. So once the AWS Answer is set up, you literally just drop files into the source S3 bucket, AWS does the rest, and provides you with a URL for your video that you can put into your code.
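By the way, if you ever want to automate that first step from code instead of the S3 console, it's only a few lines of Ruby. Here's a minimal sketch, assuming the 'aws-sdk-s3' gem and a made-up source bucket name (your actual bucket name comes from the AWS Answer deployment):

require 'aws-sdk-s3'

# Assumed region and bucket name -- substitute the source bucket that the
# AWS Answer created for you.
s3 = Aws::S3::Resource.new(region: 'us-east-1')
source_bucket = s3.bucket('vod-source-bucket-example')

# Dropping the file into the source bucket is all it takes; the AWS Answer's
# configuration kicks off the transcode job automatically.
source_bucket.object('intro.mp4').upload_file('/path/to/intro.mp4')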

All is good? We're done right?

Not quite.

Tweaks to "Video On-Demand on AWS"

So the Amazon AWS Answer is great, but it's not exactly perfect for everybody, and it definitely was not perfect for us. As we went down the road of putting the Cloudfront links in our code, we ran into a lot of issues, and it turns out the solution was to tweak some of the services and configuration that the AWS Answer set up for us.

Here are the two major changes we made.

H.264 Encoding

By default, the AWS Answer sets up a couple of encodings for the videos. When you drop a video into the source S3 bucket, AWS transcodes your file into multiple formats. The first set of these formats are all streaming formats, and then it also transcodes your video into a "single-file" format for browsers that don't support the streaming formats.

For streaming, the AWS Answer sets up encodings for HLS and DASH at various resolutions. For the "single-file" format, the AWS Answer sets up encodings for H.265 HEVC at various resolutions. If you're curious, you can actually go into your "MediaConvert" page and click on "Output Presets" to see this list of presets.

Keep in mind that if browsers support the streaming formats, they don't care about the single-file format. It's only the browsers that don't support the streaming formats that care about the single-file format.

The streaming formats are actually great and work with browsers like Safari. The problem is with the "single-file" format. Most browsers that don't support streaming formats, like Chrome, don't support HEVC H.265 either. So our backup single-file format wouldn't work.

So the first change we made to the default solution was to change the MP4 output presets.

We changed the video codec to "MPEG-4 AVC (H.264)", left everything else as default, and filled in the bitrate to be the same as before: "8500000" for 1080p and "6500000" for 720p. We also updated the name of the output preset so that it said "AVC" instead of "HEVC".

Now files that were dropped into the source S3 bucket would get converted to the HLS and DASH streaming formats as well as an H.264 single-file format.

Secondary Cloudfront Instance

Out of the box, the AWS Answer sets up one Cloudfront instance to point to the S3 destination bucket. To be more precise, it's set up to point to the S3 bucket used for the streaming video files output from the transcoding job. Your single-file H.264 files actually get put into an entirely different S3 bucket.

Since we wanted to serve both, we had to set up one more Cloudfront instance that pointed to the S3 MP4 bucket.

So our setup in the end was:

  1. Cloudfront Instance 1 pointed to S3 Bucket for Streaming Files (HLS, DASH). The name of this bucket has "abrdestination" in its name, ABR standing for "adaptive bitrate".
  2. Cloudfront Instance 2 pointed to S3 Bucket for Single-file Files (H.264). The name of this bucket has "mp4destination" in its name.

Update: Setting up Access from your Secondary Cloudfront Instance

When the AWS Answer set up your 'mp4destination' bucket, it blocked access by default. So you need to allow access from your Cloudfront Instance 2 to your 'mp4destination' bucket. By default, the AWS Answer already sets this up between Cloudfront Instance 1 and the 'abrdestination' bucket. To set up this access, we actually need what's called an 'Origin Access Identity'. Luckily, we can just re-use the one that was already set up between Cloudfront Instance 1 and the 'abrdestination' bucket. If you log into AWS and go to your Cloudfront console, you'll see 'Origin Access Identity' on the left. If you click on it, you'll see the 'VOD on AWS' user that was set up to allow access between Cloudfront Instance 1 and the 'abrdestination' bucket. Again, we are going to re-use this for Cloudfront Instance 2.

To do this, click on your Cloudfront Instance 2 and then click on the 'Origin' tab. Check the mp4 origin, and then click the 'Edit' button. Under 'Origin Access Identity', choose 'Use Existing' and select the 'VOD on AWS' user. You'll also want to select 'Yes, Update Bucket Policy' for 'Grant read permissions on Bucket'. Click 'Yes, Edit'. Saving changes like this usually takes a while to deploy so monitor the main Cloudfront console to see when the changes have taken effect.

You can also check out the Bucket Policy of the mp4 bucket to make sure that Amazon correctly added permissions for the 'VOD on AWS' account.

Alright, now your Cloudfront Instance 2 has proper permissions to the mp4 bucket.

Update: Setting up SSL for your Cloudfront instance

If you plan to serve your videos over SSL, you can provision an SSL certificate for your Cloudfront instance. To do this, log into AWS and go to the 'Certificate Manager' console. From there, you can request an SSL certificate. For our example, we would be requesting two SSL certificates, one each for

  • video-stream.bitesite.ca
  • video-file.bitesite.ca

The one catch is you'll have to either have admin e-mail access for your domain or have access to the DNS records for your domain. After you verify ownership of your domain, the certificate will be issued. At that point, you can go back to Cloudfront, click on your cloudfront instance, click 'edit', and select the newly issued SSL certificate.

Putting your videos into Code

With all that set up properly, you are now ready to put your videos into your code. Specifically, you'll be putting them into some HTML5 video tags.

There are a couple of ways to get your URLs. You can log into AWS and go to your Dynamo DB. From there, you can browse your items and you'll see your HLS URL.
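If you'd rather pull those URLs out of Dynamo DB programmatically, you can read the same table with the 'aws-sdk-dynamodb' gem. Here's a rough sketch; the table name and the 'hlsUrl' attribute are assumptions based on our deployment, so check what your own table actually contains:

require 'aws-sdk-dynamodb'

dynamodb = Aws::DynamoDB::Client.new(region: 'us-east-1')

# 'vod-example-table' is a placeholder -- browse your table in the DynamoDB
# console to find the real table and attribute names for your deployment.
result = dynamodb.scan(table_name: 'vod-example-table')
result.items.each do |item|
  puts item['hlsUrl']
end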

But in general, you can also browse your S3 destination buckets, and you'll end up with URLs similar to this:

https://<cloudfront-id-for-instance-1>.cloudfront.net/<id-of-job-in-s3>/hls/<video-file-name>.m3u8
https://<cloudfront-id-for-instance-2>.cloudfront.net/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4

Note that the .m3u8 and .mp4 files are served on different Cloudfront instances, so the subdomain will be different. Also notice that for the .mp4 file, you'll have to choose either the 1080p or 720p file to serve up.
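Since the URLs follow a predictable pattern, you might find it handy to build them in a small helper rather than hard-coding them everywhere. This is just a sketch - VideoUrlHelper and its method names are our own invention, and the host constants are placeholders for your two Cloudfront domains:

# app/helpers/video_url_helper.rb
module VideoUrlHelper
  # Placeholder hosts -- use your own Cloudfront domains (or aliases) here.
  STREAM_HOST = "https://<cloudfront-id-for-instance-1>.cloudfront.net"
  FILE_HOST   = "https://<cloudfront-id-for-instance-2>.cloudfront.net"

  def hls_url(job_id, file_name)
    "#{STREAM_HOST}/#{job_id}/hls/#{file_name}.m3u8"
  end

  def mp4_url(job_id, file_name, resolution: "720p")
    "#{FILE_HOST}/#{job_id}/mp4/#{file_name}_#{resolution}.mp4"
  end
end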

Once you have those URLs, you can put them in your HTML:

<video width="100%" controls>
  <source src="https://<cloudfront-id-for-instance-1>.cloudfront.net/<id-of-job-in-s3>/hls/<video-file-name>.m3u8" />
  <source src="https://<cloudfront-id-for-instance-2>.cloudfront.net/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4" />
  Your browser does not support HTML5 video.
</video>

And with that, you have solved the first part of the problem: serving on-demand streaming video to your users.

Now, the question is, how do you restrict it to only authenticated users?

Blocking Public Access

The first part is quite simple. You want to start off by blocking public access to your URLs. The AWS Answer by default makes your S3 buckets private, so you should be OK on that front. Users will not be able to directly paste an S3 URL into their browser and watch a video.

However, the Cloudfront instances are set up with access to the S3 buckets, and they in turn serve up the files publicly. So while users can't access your S3 bucket files directly, they can certainly access the resources through Cloudfront.

So our first step is to update Cloudfront's behaviour.

  1. Log in to AWS, go to Cloudfront, and take a look at your instances.
  2. Click on your first instance.
  3. Click on the Behaviors tab.
  4. You should see a row for the Default(*) path pattern. Check it and then click "Edit" above.
  5. Set "Restrict Viewer Access" to "Yes"
  6. Click "Yes, Edit"
  7. Repeat for your second Cloudfront instance.

Now, one thing to note. These changes don't take effect immediately. They take some time. Back on your Cloudfront main page where you see the listings of instances, you'll see a status column. If you've just made these changes, the status will probably be "In progress". You'll have to wait until this says "Deployed" before any of this works.
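If you'd rather not keep refreshing the console, you can also check the status from Ruby with the 'aws-sdk-cloudfront' gem. A small sketch, using a placeholder distribution id:

require 'aws-sdk-cloudfront'

cloudfront = Aws::CloudFront::Client.new(region: 'us-east-1')

# 'E1EXAMPLE12345' is a placeholder -- use your distribution's id from the console.
response = cloudfront.get_distribution(id: 'E1EXAMPLE12345')
puts response.distribution.status  # "InProgress" or "Deployed"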

Once it's deployed, try pasting one of your video URLs into your browser. Instead of the video, you should get an access denied error from Cloudfront.

That's a good thing! Now users can't just copy and paste your URL and share it with other users.

You've successfully blocked public access. So how do you give access to authenticated users now?

Domain setup

So for the next part, you will unfortunately have to have access to your domain registrar or DNS servers. If you don't know what that is, you'll basically need the ability to point your domains at certain servers. It's important that the website you are serving your videos on has the same domain as the Cloudfront servers for this to all work.

This is how I have it setup:

Domain                     Points to
www.bitesite.ca            Main website
video-stream.bitesite.ca   Cloudfront Instance 1
video-file.bitesite.ca     Cloudfront Instance 2

For Cloudfront, you'll have to set up CNAME records, and you'll have to log into AWS and configure your Cloudfront instances' alternate domain names. You can do this by clicking on the instance and, on the General tab, clicking "Edit". Again, you'll have to wait until the status is "Deployed" before all this starts working.

The domain setup here is absolutely crucial as we'll be using cookies. Cookies do heavily depend on the domains that you are visiting.

Obviously you'll adapt this for your own domain.

Local Testing

In the next step, we'll start talking about cookies. For those who don't know, cookies are basically a collection of key-value pairs that generally get sent with every request to the same host/domain. The interesting thing about cookies is that they are typically set by server code. A typical flow would be like this:

  1. A browser makes a request for a web page.
  2. The server receives the request, and sets a cookie.
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value.
  5. The browser sends the cookie data on every subsequent request to the server.
  6. The server uses the cookie data.

Now the interesting thing with cookies is that they are limited by domain. What's even more interesting is that cookies can be set up to apply to any subdomain within the master domain. So if set up properly, the browser will not only send a cookie on every request back to the same server, it will also send that cookie to any other server that shares the same domain. So it would look something like this:

  1. A browser requests a page from www.bitesite.ca.
  2. The server receives the request, and sets a cookie for the master domain (.bitesite.ca).
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value for (.bitesite.ca).
  5. The browser sends the cookie data on every subsequent request to any server ending with .bitesite.ca (which includes staging.bitesite.ca, video-stream.bitesite.ca, etc.)
  6. Any of those servers can use the cookie data.

You can see where this is going.
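To make that concrete in Rails terms, step 2 above boils down to setting the cookie with domain: :all, which scopes it to the parent domain instead of the exact host. A tiny sketch (the cookie name and value are just examples):

# Somewhere in a controller action -- :example_key and its value are
# placeholders, purely to illustrate the domain scoping.
cookies[:example_key] = {
  value: "example-value",
  expires: 1.hour.from_now,
  domain: :all   # sets the cookie for ".bitesite.ca", not just "www.bitesite.ca"
}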

So this is all well and good if you're hosting your code on a server with the proper domains set up, but what about when you're still developing and you want to test on localhost? Well, localhost is its own domain. So if you're testing with something like video-stream.bitesite.ca, how are you going to get your localhost to set a cookie for .bitesite.ca?

Ideally, you would want something like www.bitesite.ca to point to your localhost.

Well, it turns out there are a lot of different ways to do this, but the quickest way is to edit your "hosts" file.

Warning: Editing your hosts file will alter the way your system resolves hostnames. So be very careful when editing it, and when you're done testing, revert it.

Because I typically don't want to mess with real websites in my browser, and because it doesn't really matter what subdomain I use, rather than pointing www.bitesite.ca to my localhost, I chose to point dev.bitesite.ca to my localhost.

On a Mac, you'll open up /etc/hosts and add this line to it:

127.0.0.1 dev.bitesite.ca

With that line in place, now when I access dev.bitesite.ca, it hits my localhost, and any cookies I set can be scoped to the .bitesite.ca domain.

Since browsing to dev.bitesite.ca without specifying a port hits port 80, and port 80 is usually privileged, you can run your Rails server on port 80 by doing something like this:

sudo rails s -p 80

If you're using RVM like me, you'll have to do something like this:

rvmsudo rails s -p 80

(One catch to this is that the server is now running under the 'root' user, so make sure your database can accept connections from 'root'. I had to add 'root' as another user to my PostgreSQL database.)

So you should now be able to fire up your Rails server and access it at "dev.bitesite.ca".

Alright, we're all set to move on.

AWS Signed Cookies

So when you restrict access to S3 or Cloudfront files, Amazon provides you with two mechanisms to give temporary access to those files:

  • Signed URLs
  • Signed Cookies

Signed URLs can actually be applied to both S3 URLs and Cloudfront URLs, but for our example, we're only dealing with Cloudfront URLs. A Signed URL is basically a URL that you can provide to the user that gives them temporary access to a resource. The way it works is you write server-side code to generate a URL that contains query string parameters specifying how long that URL is valid. When that URL hits the Amazon AWS servers, the Amazon servers check the URL's parameters to see if the URL is valid. They check if it's expired, and also check a signature to ensure that the URL was created by an authorized party. Your server-side code has access to AWS private keys to create these special signed URLs. Amazon even provides libraries for Ruby to do this.
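For reference, here's roughly what generating a Signed URL looks like in Ruby. We didn't end up using this mechanism, so treat it as a sketch with example key ids and paths:

require 'aws-sdk-cloudfront'

signer = Aws::CloudFront::UrlSigner.new(
  key_pair_id: 'APKAEXAMPLEKEYID',                          # example value
  private_key_path: './cloudfront/pk-APKAEXAMPLEKEYID.pem'  # example path
)

# Grants temporary access to this one file only, for the next 10 minutes.
url = signer.signed_url(
  'https://video-file.bitesite.ca/<id-of-job-in-s3>/mp4/intro_720p.mp4',
  expires: Time.now + 600
)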

Signed Cookies are very similar (and as far as I know only apply to Cloudfront URLs). The idea of a signed cookie is that you create a cookie that contains a policy. That policy specifies which files the cookie applies to. Then when the browser requests a URL from the Amazon servers, the Amazon servers will look at the cookie that comes along with the request (remember, cookies are sent automatically with every request to the same master domain) and take a look at the policy. If the policy allows the URL that the browser is requesting, then Amazon will send back the resource successfully. For security reasons, the Amazon servers will also check that the cookie was created by an authorized party. In this case, this usually works by having your server code create a cookie using Cloudfront private keys. Again, Amazon provides Ruby libraries to do this.

The big advantage with Signed Cookies is that you specify a policy that can encompass more than one file. So it's an easy way to give access to an entire set of files. This is particularly important when it comes to streaming, because when you stream a video you're actually requesting multiple files (10-second chunks, for example). So rather than creating a Signed URL for every one of those chunks, you can create a cookie that grants access to all of them.

So for this solution, we'll set up signed cookies for users that are authenticated. But to create these Signed Cookies, our server-side code has to be authorized to do so. How do we authorize our server code to create cookies? We use Cloudfront private keys.

Cloudfront Key Pairs

If anybody could randomly create a signed cookie, it wouldn't really be protected. In fact, the "signed" part is what makes it protected. Only people authorized to create cookies can create cookies that will pass the Amazon servers' check. To make your Rails code authorized, it will need access to Cloudfront keys. To do this:

  1. Log in to Amazon AWS
  2. Click on your username in the upper-right and select "My Security Credentials".
  3. Ignore the warning about IAM by clicking "Continue to Security Credentials" as Cloudfront keys only work at the User Account level.
  4. Expand the "Cloudfront key pairs" section.
  5. Click on "Create New Key Pair".
  6. The pair will be created and you'll be presented with some options.
  7. Download the PRIVATE key file.
  8. Then click "close".
  9. You'll be brought back to your list of Keys. You should also see the "ACCESS KEY ID". Keep this window open as you'll need that value.

We then put the private key file into our source, but be warned that this file should not be accessible to the public. So if you're hosting your source code in a public repository, you'll want to find somewhere else to put this file. Because our source code is private, we put the private key in /railsapproot/cloudfront.

Creating a signed cookie in Rails

Ok, so we have our private key and access key ID ready to use so we can properly create signed cookies. Let's put these to use.

AWS SDK Gem

First, grab the 'aws-sdk' gem. I used version 3 of the SDK. In your Gemfile:

gem 'aws-sdk', '~> 3'

Initializer

Second, let's set up a global Cookie signer to use in our app. Create an initializer config/initializers/aws.rb and put this code in it:

CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
  key_pair_id: "DSDIEJWJRIOWEJRWEOJR",
  private_key_path: "./cloudfront/pk-DSDIEJWJRIOWEJRWEOJR.pem"
)

You'll fill in your key_pair_id with the ACCESS KEY ID from the previous step. For the private_key_path, type the path to where you saved the private key file. The Access Key Id might work better as an environment variable as well. So you might have something more like:

CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
  key_pair_id: ENV['AWS_CLOUDFRONT_KEY_PAIR_ID'],
  private_key_path: "./cloudfront/pk-DSDIEJWJRIOWEJRWEOJR.pem"
)

before_action to create the cookie

So, the next question is, when do you actually want to create the cookie? My first approach was to create the cookie right after the user signed in. That seemed smart. The thing is, if they signed in and left their browser open for a long time, the cookie might expire and then they'd have to sign out and sign back in. You could manage this by automatically signing them out when the cookie expires, but I decided that was too complicated for my use case. You can definitely do it that way, but here's what I decided to do.

I decided to write a before_action for all actions that checks if the user is signed in. Then, if the user is signed in, I set the cookie. This way, every request they perform while they're signed in just ends up renewing the cookie. The only catch to this is ensuring you clean up the cookie when they sign out.

So here's what my application controller looked like:

class ApplicationController < ActionController::Base
  before_action :set_cloudfront_signed_cookie

  ...

  private

  def set_cloudfront_signed_cookie
    if user_signed_in?
      cookies_values = CF_COOKIE_SIGNER.signed_cookie("http://dummyurl.com/", policy: policy)
      cookies_values.each do |k, v|
        cookies[k] = { value: v, expires: 10.minutes.from_now, domain: :all }
      end
    end
  end

  ...
end

So this runs before every action. If the user is signed in, we create a signed cookie using the CF_COOKIE_SIGNER from the AWS SDK. That will spit back a hash of values that we have to write to the client's cookies. For each cookie value, we set it to expire after 10 minutes, and we also specify the very important domain: :all. What that argument does is set the cookie for ".bitesite.ca" rather than "dev.bitesite.ca". Once you do that, those cookie values will also be sent with requests made to "video-stream.bitesite.ca" and "video-file.bitesite.ca".

Let's take a closer look at the initial call to CF_COOKIE_SIGNER.signed_cookie.

First of all, you'll see I've passed dummyurl.com into the method. This is not a placeholder for demonstration purposes, and not a mistake. This is literally the code I use, and I'll tell you why. If you're passing a custom policy to this method, the URL parameter doesn't matter at all. So I purposely put "dummyurl.com" to let other developers know that the URL has nothing to do with making this all work.

Now, what I just said is that the URL is ignored if you pass in a custom policy, and that custom policy is what the second argument, policy, is for. Let's take a look at that method below, which also lives in the application controller as a private method:

class ApplicationController < ActionController::Base
  ...

  private

  def policy
    resource = "http*://video*.bitesite.ca/*"
    expiry = 10.minutes.from_now

    {
      "Statement" => [
        {
          "Resource" => resource,
          "Condition" => {
            "DateLessThan" => { "AWS:EpochTime" => expiry.utc.to_i }
          }
        }
      ]
    }.to_json.gsub(/\s+/, '')
  end
end

This is the policy included in the cookie that the Amazon servers will check when the browser makes a request. The expiry specifies how long the cookie is valid for. Remember, we call this every time a signed-in user takes an action, so it gets renewed every time they browse to a page. What's more important here is the way the resource string is constructed. Amazon allows you to put wildcards in the resource URL. This is the key to the policy working for multiple files (and multiple servers for that matter).

Let's break down the three wildcards. First you have

http*://

This is optional, but basically allows secure and non-secure requests. That is, it will allow the browser to request "http://" and "https://".

Secondly, we have the host:

video*.bitesite.ca

What's nice about this, is this will allow the cookie to work for both our streaming Cloudfront instance and our single-file Cloudfront instance. That is, it will work for both "video-stream.bitesite.ca" and "video-file.bitesite.ca".

And lastly we have the path:

/*

That allows the cookie to apply to basically any file hosted on those servers.
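To make it concrete, here's roughly what the policy generated by that method looks like before the whitespace is stripped (the epoch time is just an example value):

{
  "Statement": [
    {
      "Resource": "http*://video*.bitesite.ca/*",
      "Condition": {
        "DateLessThan": { "AWS:EpochTime": 1546300800 }
      }
    }
  ]
}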

Alright that's it. Put that into your code, sign in and browse to a page. Your cookies should now be set. It's really easy to see these in Chrome. Just open your inspector tools and go to the Application tab. Open up your cookies and you should see cookies for your dev.bitesite.ca domain.

You'll see that the domain on the Cloudfront cookies are ".bitesite.ca".

The HTML Code

With your cookies in place ready to be sent with your video-stream and video-file requests, you're ready to cap it all off. Code a page, and put this in:

<video>
  <source src="http://video-stream.bitesite.ca/96f543c-2882-52b3-2b3e-42a35c5b184/hls/intro.m3u8" />
  <source src="http://video-file.bitesite.ca/96f543c-2882-52b3-2b3e-42a35c5b184/mp4/intro_720p.mp4" />
  Your browser does not support HTML5 video.
</video>

Feel free to add thumbnails generated by AWS and controls:

<video width="100%" controls poster="http://video-stream.bitesite.ca/96f543c-2882-52b3-2b3e-42a35c5b184/thumbnails/intro_abr_tumb.jpg">
  <source src="http://video-stream.bitesite.ca/96f543c-2882-52b3-2b3e-42a35c5b184/hls/intro.m3u8" />
  <source src="http://video-file.bitesite.ca/96f543c-2882-52b3-2b3e-42a35c5b184/mp4/intro_720p.mp4" />
  Your browser does not support HTML5 video.
</video>

And that's pretty much it! Everything should be working.

If you want to make sure it's secure, grab the mp4 URL, log out of your app, wait 10 minutes for the cookie to expire, and then paste that URL into your browser. You should get an error.
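If you want to script that check instead of doing it by hand, a quick sanity test could look something like this (using the example mp4 URL from above; without a valid signed cookie, Cloudfront should refuse the request):

require 'net/http'

uri = URI('https://video-file.bitesite.ca/96f543c-2882-52b3-2b3e-42a35c5b184/mp4/intro_720p.mp4')
response = Net::HTTP.get_response(uri)

# With no signed cookie attached, we expect a 403 from Cloudfront.
puts response.code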

Finishing it all off, cleaning up your cookies

Now, because of my decision to renew the cookie on every request, it's a good idea to kill the cookie right after a user logs out. So wherever you handle logging out, clear the cookies. I use Devise, so I override SessionsController#destroy and do this:

class SessionsController < Devise::SessionsController
  def destroy
    clear_cloudfront_cookies
    super
  end

  private

  def clear_cloudfront_cookies
    cookies.delete("CloudFront-Key-Pair-Id", domain: :all)
    cookies.delete("CloudFront-Policy", domain: :all)
    cookies.delete("CloudFront-Signature", domain: :all)
  end
end

It's VERY important that you specify domain: :all, because that's how the cookies were set up. Otherwise, they won't delete properly.

Conclusion

With that, you now have a great video solution! Congrats. This took me 3-4 solid days of debugging to get through, so hopefully this helps some of you out there. The great thing about the AWS Answer is that it gives you nice infrastructure for uploading, transcoding, and serving the files. So in the future you can build an interface for users to upload files. Once they're uploaded to the S3 source bucket, they will automatically get transcoded, and then you can inspect the Dynamo DB programmatically to serve them up.

Our project didn't require that level of sophistication, but it's good to know we have it in our back pocket if we need it. With Cloudfront and streaming files, you know you're serving your users fast and with minimal data to view the video.

There's always room for improvement, so be sure to let us know if you have anything to add to this. (At the time of writing this blog we don't have comments implemented, but they will be coming soon.)

Thanks for reading.

Casey Li
CEO & Founder, BiteSite