Ruby on Rails QuickTip: Adding Parameters to your Methods Safely

ruby software ruby on rails

This is going to be a quick post. It applies to a lot of different languages, but it's something we've been doing a fair amount in our Rails projects.

If you're ever working on a larger codebase and you decide you want to add a parameter to a method, but are afraid to do so because it might break code elsewhere, consider simply adding a default value.

Take the following setup, for example:

class MyModel < ApplicationRecord
  def my_method(user_id)
    user = User.find user_id
    # ...
  end
end

Now, let's say you want to modify this method so you can add another parameter. Here's a safe way you can do it:

class MyModel < ApplicationRecord
  def my_method(user_id, new_parameter = nil)
    user = User.find user_id
    if new_parameter
      # do something with new_parameter
    end
  end
end

Now, that new parameter's default value could have been anything, but the keys here are:

  1. Have a default value so that any code calling this method with the original parameters will still work
  2. When the new parameter is not passed in, have the method behave the exact same way it did before
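To see both principles in plain Ruby, here's a hypothetical `greet` method (not from the Rails example above) that gains a new parameter safely:

```ruby
# A hypothetical method showing the same idea: the new parameter
# defaults to nil, so old call sites keep their old behaviour.
def greet(name, greeting = nil)
  return "#{greeting}, #{name}!" if greeting
  "Hello, #{name}!" # original behaviour when the new parameter is absent
end

greet("Casey")          # old call site, unchanged
greet("Casey", "Howdy") # new call site, opts in to the new behaviour
```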

With these two principles you'll be able to extend your code without breaking any existing code.

Casey Li
CEO & Founder, BiteSite

Ruby on Rails accepts_nested_attributes_for is deleting my associated record!

software ruby on rails

So I've been programming Ruby on Rails for about 9 years now and I'm still learning new things every day. This is one that definitely caused us some issues.


So here's the setup. We have a User model

class User < ApplicationRecord
  has_one :profile, dependent: :destroy
  accepts_nested_attributes_for :profile
end

And we have an associated Profile model

class Profile < ApplicationRecord
  belongs_to :user
end

Notice that on the User model, we have

accepts_nested_attributes_for :profile

If you don't know what that does, it basically allows you to run creates and updates on the User model and Profile model in one single call. So for example, you can do this:

user = User.first
user.update({ email: '', profile_attributes: { first_name: 'Casey', last_name: 'Li' } })

If you set up your form correctly and with proper strong params, you can put the User and Profile all in one single form for the user making it easy to update both models at the same time. BUT BE VERY CAREFUL!
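For reference, the strong params for a form like that might look something along these lines - a hypothetical sketch (the controller method name and permitted fields are assumptions, not from our actual app):

```ruby
# In UsersController - a hypothetical strong-params method that permits
# the nested profile attributes alongside the user's own fields.
def user_params
  params.require(:user).permit(:email, profile_attributes: [:id, :first_name, :last_name])
end
```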


If you have a setup very similar to ours, basically a one-to-one relationship, then you have to be very careful with your update calls. If you call update on the parent record (in our case the User record) and pass in child attributes WITHOUT passing in the child record's ID, it will actually delete your existing child record and create a new one! Yeah, I didn't know that either. Try it out and look at your Rails logs: you'll see a SQL delete statement followed by an insert.

Again, this is for one-to-one relationships only while doing accepts_nested_attributes_for updates.


If this is the behaviour you want, obviously you're ok. But for us, we wanted to update the existing child record rather than destroying what already existed.

If you want to update the existing record, there are two things you can do.

Solution 1: Pass in the ID of the child record

While I haven't tried this myself, if you pass in the child record's ID, it should perform an update rather than a delete/insert.

user = User.first
profile = user.profile
user.update({ email: '', profile_attributes: { id: profile.id, first_name: 'Casey', last_name: 'Li' } })

Solution 2: Use the 'update_only' option

This is the solution we went with. When declaring your accepts_nested_attributes_for, you can pass in the update_only option:

class User < ApplicationRecord
  has_one :profile, dependent: :destroy
  accepts_nested_attributes_for :profile, update_only: true
end

Learn something everyday. Hope this helps out some peeps. Thanks for reading!

Casey Li
CEO & Founder, BiteSite

Fixing Rails + Carrierwave + Amazon S3 403 Forbidden Error

amazon s3 carrierwave coding software ruby on rails

So we've been using CarrierWave for a long time now for our Ruby on Rails projects. Even when we converted to direct upload, we were still using CarrierWave for image processing. It has stood the test of time and I've heard from other developers that it offers more flexibility than ActiveStorage in its current state.

The Problem

We were implementing a very basic CarrierWave solution to upload files and attach them to an ActiveRecord model. The requirement was that the uploaded files needed to be publicly visible (the default CarrierWave behaviour). It was very standard, basic CarrierWave:

  • Create an Amazon AWS Account
  • Create an IAM User that has full access to S3
  • Create a Rails initializer to use the IAM User Keys and Fog/AWS
  • Create a CarrierWave Uploader and attach it to the ActiveRecord model
  • Add the appropriate fields to the form
  • Get the user to upload files!
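For the initializer step, our setup looked something along these lines - a minimal sketch assuming the 'carrierwave' and 'fog-aws' gems, with the bucket name, region, and credential env var names as placeholders:

```ruby
# config/initializers/carrierwave.rb - a minimal Fog/AWS setup.
# The bucket name, region, and env var names here are assumptions.
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region:                'us-east-1'
  }
  config.fog_directory = 'my-app-uploads' # your S3 bucket name
  config.fog_public    = true             # files publicly visible (the default)
end
```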

All good! This is what we've been doing for years and things have been working great.

But during this implementation, we got the following error:

Excon::Error::Forbidden (Expected(200) <=> Actual(403 Forbidden)
  :body          => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>(obfuscated)</RequestId><HostId>(obfuscated)</HostId></Error>"
  :cookies       => [
  :headers       => {
    "Connection"       => "close"
    "Content-Type"     => "application/xml"
    "Date"             => "Thu, 14 Mar 2019 12:34:03 GMT"
    "Server"           => "AmazonS3"
    "x-amz-id-2"       => "(obfuscated)"
    "x-amz-request-id" => "(obfuscated)"
  :host          => "(obfuscated)"
  :local_address => "(obfuscated)"
  :local_port    => 49582
  :path          => "(obfuscated)"
  :port          => 443
  :reason_phrase => "Forbidden"
  :remote_ip     => "(obfuscated)"
  :status        => 403
  :status_line   => "HTTP/1.1 403 Forbidden\r\n"

What was going on?

Realization of the issue

So it turns out the issue was that what we needed, and what CarrierWave is set up to do by default, does not play nicely with Amazon's current default S3 settings. We needed our app to upload files and make them publicly visible. However, when you create a new S3 bucket, it is not set up to allow that by default.


So how do you fix it? Go to your bucket and click on the "Permissions" tab. From there, make sure that both "Block new public ACLs and uploading public objects" and "Remove public access granted through public ACLs" are set to false. There is a little "edit" link in the upper right that lets you change these settings. If you hover over the "i" icon while editing, you can read a little blurb explaining that these settings affect objects uploaded with public ACL settings.

So set those to false and that should resolve the issue.

Be careful

Now, one thing I should mention is that you might want to be careful when it comes to your app and your specific security requirements. Our app was all about posting public files. If you're in a situation where you need more fine-grained privacy control or are dealing with sensitive files, you might want to look into creating custom policies and custom users. The solution I've detailed here is a barebones, public file upload solution. It's also a good way to get things going if you're in a prototyping phase or in that phase where you just need to get things working.

Hopefully this helped some of you out.

Thanks for reading.

Casey Li
CEO & Founder, BiteSite

How much does custom software cost?

business software

If you’ve been reading our blog lately, chances are you’re interested in custom software. Recently we’ve written articles about what custom software is, the pros and cons of custom software, and how to get started with an MVP.

This may have piqued your interest a bit, and it may have you considering your own project and which direction to go. In the early planning phases, you might start to weigh your options between the status quo, using a pre-existing application, or building custom software.

There is a lot to consider when making your decision, but inevitably, one of the biggest criteria that will factor into your decision is cost.

Now, we briefly talked about cost in our Pros and Cons of Custom Software article and how custom software can be expensive. The question is, though, how expensive?

Invariably, when writing articles about cost, you have to be careful. There are so many variables that factor into how much a given company charges, and there are no blanket statements that apply to all companies. That's probably one reason why most companies don't talk about the subject publicly.

The reality is, for the same software output, you may pay someone $200.00 and you may pay someone else $20,000.00. The discrepancy can be that big. But you have to consider that for the same software output, you might not exactly be getting the same service. Sure you may end up with the same basic application, but what’s different? Does the person you’re paying have years of experience behind them? Do they have a team that will help you in case things go horribly wrong? Do they have processes that make the development cycles more efficient and less stressful?

These questions just scratch the surface of what can differ from vendor to vendor. But besides your project, you have to consider what the vendor has to account for. If the vendor is an individual working out of their home, they will obviously have way less to pay for than a team of 20 working out of an office. Any business owner will tell you how high costs can get.

Another factor is simply the types of clients that the vendor deals with. Some vendors deal with huge organizations that spend hundreds of thousands if not millions of dollars on custom software projects. These vendors will naturally price themselves for those customers. In some cases, it’s due to the fact that these types of customers have way higher demands, but in other cases, it’s simply what the client is used to paying.

So when thinking about the cost of custom software, you have to do a bit of thinking about anything you hear or read and understand that there is a LOT that goes into the number that someone gives you.

I can’t speak for other vendors and how they do things, but I can write about how we run things at BiteSite. I would assume that most vendors follow something similar, but again, I’m only speaking from our experience.

The problem with Fixed-Price Contracts

When BiteSite started, our contracts and projects fit into two categories: fixed-price and hourly billing. For the fixed-price contracts, here’s what our pricing process looked like. We would sit down with the client, discuss the project, sometimes break the project down into milestones, and agree upon a feature set. We would then take that feature set, estimate how many hours it would take, multiply those hours by our standard rate, and draw up a contract based on that number.

Because we had seen other companies do this, we thought this was the way to go. But after a few projects went way over budget, and after reading Thoughtbot’s Open Source Playbook, we had the confidence to say “This is not working out.” (In fact, Thoughtbot taught me the phrase “Fixed-Price Bid”.)

The problem with software development is that it’s usually best developed in an agile manner. That is, with shifting priorities, MVP philosophies, and incremental changes based on continuous feedback. When you put all those principles in place, software development is generally very hard to predict and map out exactly over a long period of time.

So if you come up with a contract for a 2-3 month project, there is an incredibly high probability that things are going to come up that you didn’t envision. In fact, if you’re a good agile developer, you welcome the unforeseen feedback that causes you to shift course.

With that in mind, we abandoned fixed-price contracts for all of our custom software projects and moved purely to an hourly-based billing system.

Making our clients feel comfortable

So going to an hourly-based billing system was great for us. Not only did we stop going over budget, but we no longer stressed about beating the clock. We did our work at a comfortable pace, which in a lot of ways allowed us to do better work.

So that’s all well and good for us, but what about the client? When a client approaches you about creating something for them, it’s hard for them to hear “We’re just going to build it, and charge you at the end of the month.”

With fixed-price contracts, they knew exactly how much they were going to spend. With hourly based billing, they were left in the dark and just had to trust us.

So to combat this, we married our hourly-based billing with a couple of other concepts:

  • Rough Estimate: While we don’t do fixed price contracts, and we don’t stick to a specific number, based on what the customer is asking, we do give a rough estimate. For example, we’ll tell them that what they are initially asking for will take roughly 30 hours and multiply that by our hourly rate.
  • Re-emphasize that the rough estimate is purely an estimate: Despite giving them an estimate, we still emphasize that software is unpredictable and that a lot of unforeseen development may come up.
  • Transparency along the way: If money is a big concern, we let our clients know along the way how many hours we’re spending. We informally agree that if we’re coming close to the rough estimate, we can sit down and chat to make some decisions.

With these three things in place, clients feel much more comfortable agreeing to work with us.

The last thing, though, is that there has to be trust between us and the client. In our first meetings, based on our interactions, our previous work, and our reputation, there grows a trust that we will do our best work and not overcharge them. Once that trust is established, both parties stop watching the clock and the dollars and start focusing on the project.

This trust between client and vendor is something that should exist with any company you choose. Thoughtbot has a great article on this here.

So, how do we price?

With all that in mind, our pricing becomes pretty easy. For starter projects, we figure out a rough scope of features, we do a rough estimate of hours, and give the rough number to the client understanding that it can fluctuate.

After the project has some legs, usually it moves to on-going work. Our clients ask us for work to be done, we do the work, and bill at the end of the month for our time. If the client is concerned about cost at that point, we do an informal estimate of hours as well.

Stop skirting - what are the actual numbers?

Like I said, vendors vary heavily in cost, and even in our lifetime, BiteSite has changed its rates several times and will probably change them again in the future. With that said, as of the time of this writing, our standard rate for software development is $150.00 CAD + HST per hour. That is a standard hourly rate that covers all of our services. We’re a small team, so our staff are jacks-of-all-trades who cover everything from Product Management, to Design, to Coding, to Deployment - basically everything you need to get a software project off the ground.

When we start a project with a client, we usually like to scale down their big vision to a Minimum Viable Product, or MVP. Because software can be very unpredictable to develop, it’s good to come up with a small product that can generate valuable feedback that we can build upon.

Our best MVPs have been in the neighbourhood of 30-50 hours of work. So a good starter project would cost anywhere between $4500.00 CAD and $7500.00 CAD + HST.

So as a rough start, we usually tell our clients that $5000.00 + HST is a good starting budget.

So what can a $5000.00 MVP get me?

So what does 30-50 hours of work look like? At BiteSite, we focus on web applications, so let’s talk about those. Features vary a lot from client to client, but here's an example project to give you an idea:

  • A web application
    • that allows my staff to login and manage their account and profile
    • that automatically calculates total vacation days an employee is entitled to based on their start date
    • that allows my staff to request their vacation days
    • that allows supervisors to approve vacation days
    • that allows supervisors to customize how many vacation days each employee is entitled to
    • that e-mails supervisors anytime someone has logged a vacation day
    • that e-mails staff anytime their vacation is approved
    • that summarizes total vacation days in a report

This is a very high-level description, but it’s a good example of a good MVP. If a client came to me with that description for a project, I would say that’s a great starter project that would probably cost around $4000.00 to $5000.00 + HST.

On the subject of the MVP, we try as much as possible to get our clients’ projects down to something in the range of 30-50 hours because we feel that’s a good spot when it comes to foreseeable development. Anything past that, we feel it’s better to develop something small now and see what happens later rather than plan out every single detail.

What happens after the MVP?

After an MVP is launched, it’s really anyone’s guess how much more you will spend. Depending on many factors including how much you're dedicated to the project, how much the software gets used, and how "on the mark" the original features were, your software could demand a lot more future work or very little.

If the software is very successful and you want to keep adding more and more to it, it’ll cost you more. Chances are though, it will also help you more and potentially generate revenue for you. On the other end of the spectrum, you may find that the MVP is perfectly fine and just needs a few tweaks every now and then.

We’ve had projects that have become very successful and demand full-time work, where hundreds of thousands of dollars of development are spent every year, and we’ve had small projects that cost under $100.00/year to maintain. After your MVP, though, you start to get an idea of how much effort it takes to add to your product - and if you don't, that should always be an open conversation with your vendor.

So when considering custom software, it’s a good idea to think about your initial MVP cost, and then the potential to fund it afterwards. While our clients may have way bigger budgets, we still encourage them to start with the MVP and go from there.

The fine print

This article is called ‘How much does custom software cost?’, and while the majority of your cost will be spent on labour, I would be remiss to leave out the extra costs that a client is typically responsible for. When it comes to developing software, there are usually a lot of services involved that you’ll pay for. For example, if you want to develop an iOS app for the iPhone, you’ll have to pay $99/year to have it on the App Store. If you want to develop a web application, you’ll have to pay for domain and hosting costs. So when considering your budget, don’t forget to discuss with your vendor any extra costs on top of the service labour they’re providing.


Like I said, I can only speak to our own company which is basically a $150.00/hr rate. Other companies will have lower rates and others will have higher. Some will do fixed price - but the fixed price probably factors in some hourly estimate of the project.

So if the price can vary so much - what’s the point of this article? First of all, I wanted to educate the market on where the price is coming from. Second of all, we at BiteSite want to be transparent about our pricing. It helps with our own projects and helps push others to be transparent.

Not to mention, I hate when I look around the web and can’t get a single answer to a question I have. If your question is “How much does custom software cost?”, well now you have a starting point.

Casey Li
CEO & Founder, BiteSite

Serving Videos to Authenticated Users using Amazon AWS and Ruby on Rails

amazon aws video coding software ruby on rails

Update: The original posting of this article left out two key points that have to do with serving your videos over SSL and allowing your second Cloudfront instance to access your S3 MP4 bucket. Scroll down to the section on 'Secondary Cloudfront instance' for the details.

So we recently got a project to do something we have never done before: create a web application that would only serve videos to authenticated users.

It's a pretty common use case: if you're logged in as someone who is authorized to view the videos, you can click on a page and watch a video. If you're not, then you can't view that page, NOR can you copy the link to the video to watch it directly. Now, as common as this is to use, it isn't super common to develop. At least not for us.

Yes, you can use streaming services like Vimeo, but what if your client wants their own custom solution? This is the challenge we were faced with.

And it turns out, there were really two things we had to solve:

  • Serving streaming video on-demand
  • Securing the streaming video to only authenticated users

(Just a small note: video streaming usually breaks down into two categories, live and on-demand. Live streaming is when you are shooting video and streaming it to users' devices at the same time. On-demand is like traditional YouTube, Vimeo, or Netflix - your users are watching pre-recorded video. For our project, we were implementing on-demand video.)

Ruby on Rails is our development web framework of choice, and AWS S3 is our asset storage of choice. So we knew the solution involved some combination of those two.

Now, the thing is - a good chunk of this is documented around the web, but I didn't come across a post that showed the end-to-end solution for Rails. So here you go.

Amazon Web Services

If you don't already know, Amazon has an entire part of its business called Amazon Web Services, separate from its consumer-facing online shopping platform. Amazon Web Services is a collection of services that Amazon provides to developers to help them build applications. They have everything from virtual servers, to databases, to media encoding, to storage systems. Each of these has its own name; for example, the storage solution is called "S3". You can do some more reading on Amazon Web Services on their official home page.

AWS Answers

When I started this project, I knew nothing about streaming video to devices. I originally thought the easiest thing to do would be to upload a video to Amazon S3, and just have a link to that video in the HTML code. As I started to research hosting video on AWS though, it turns out that is not a good solution. With that solution, you force every user to download the entire video, scrubbing back and forth is not ideal, and it's not true streaming in the sense that it's not downloading small packets of the video.

The preferred solution involves encoding your video into streamable chunks and serving those to the customer and only if their browser doesn't support streaming do you serve them the entire file. This solution also helps with dynamically changing the quality depending on the user's connection speed. So the first thing to consider is encoding your video into multiple formats that browsers support to optimize the video viewing experience.

On top of that, Amazon recommends that you consider using their CDN service, Cloudfront, to serve your assets to your users. What a CDN does is effectively copy your resources to multiple servers so that when a browser accesses a resource, it grabs it from a server that is geographically closer to it. This ensures fast responses and load balancing between all your different users. So the second thing to consider is setting up Cloudfront to serve your videos.

To do all this, there are actually a lot of moving parts and a lot of complexity involved. The great thing is, Amazon actually does supply all the services needed to execute this, but the question is how do you set it all up?

Well that's where AWS Answers comes in.

AWS Answers is a collection of solutions to common problems. So let's say you wanted to build an "Internet of Things" solution. AWS Answers has a solution for that, setting up everything you need to get up and running. Let's say you wanted to create a backend server for a mobile app - well, there is an AWS Answer for that as well.

AWS Answers comes with documentation such as guides and FAQs on how to set up everything you need. For example, it may tell you, "You should set up an AWS DynamoDB table and an AWS S3 bucket...". But the coolest thing about AWS Answers is that the solutions also come with automatic deployment scripts. This means that you can click a button, fill out a couple of fields, and then boom - AWS automatically sets up everything you need. It's pretty amazing.

And guess what? There is an AWS Answer for On-demand Video Streaming.

Video On-Demand on AWS

So this article won't explain all the details of the "Video On-Demand on AWS" AWS Answer, but I will break down the basics of all the moving parts. When you deploy this AWS Answer, here are some of the major parts that get setup for you:

  • S3 Buckets (both for the original video files, and the transcoded files)
  • Dynamo DB (a database to keep track of your video files)
  • MediaConvert (to transcode your actual videos)
  • CloudFront (to serve your files to your users)

The AWS Answer actually sets up Lambda functions and Step functions as well, but I want to concentrate on the major parts in this article. You can read about everything else on the AWS Answer Page.

The basic workflow is this.

  1. You upload a video file into one of the S3 buckets that the AWS Answer set up for you (the source bucket).
  2. The bucket is setup to automatically run a transcode job on any video files in that bucket.
  3. The transcode job starts to transcode your video file into appropriate streaming formats.
  4. The transcode job drops its completed files (transcoded video files and thumbnails) into another S3 bucket (the destination bucket) that the AWS Answer set up for you.
  5. The newly transcoded video files are now available to the Cloudfront instance that the AWS Answer set up for you.
  6. You put the Cloudfront URL to your video into your code.

That is the basic setup. So once the AWS Answer is setup, you literally just drop files into the source S3 bucket, AWS does the rest, and provides you with a URL for your video that you can put into your code.

All is good? We're done right?

Not quite.

Tweaks to "Video On-Demand on AWS"

So the Amazon AWS Answer is great, but it's not exactly perfect for everybody, and it definitely was not perfect for us. As we went down the road of putting the Cloudfront links in our code, we ran into a lot of issues, and it turns out the solution was to tweak some of the services and configuration that the AWS Answer set up for us.

Here are the two major changes we made.

H.264 Encoding

So by default, the AWS Answer sets up a couple of encodings for the videos. When you drop a video into the source S3 bucket, AWS transcodes your file into multiple formats. The first set of these formats are all streaming formats, and then it also transcodes your video into a "single-file" format for browsers that don't support the streaming formats.

For streaming, the AWS Answer sets up encodings for HLS and DASH of various resolutions. For the "single-file" format, the AWS Answers sets up encodings for H.265 HEVC of various resolutions. If you're curious, you can actually go into your "Media Convert" page, and click on "Output Presets" to see this list:

Keep in mind, that if browsers support the streaming formats, they don't care about the single-file format. It's only the browsers that don't support the streaming formats that care about the single-file format.

The streaming formats are actually great and work with browsers like Safari. The problem is with the "single-file" format. Most browsers that don't support streaming formats, like Chrome, don't support HEVC H.265 either. So our backup single-file format wouldn't work.

So the first change we made to the default solution was change the MP4 output presets.

We changed the video codec to "MPEG-4 AVC (H.264)", left everything else at its default, and filled in the bitrate to be the same as before: "8500000" for 1080p and "6500000" for 720p. We also updated the name of the output preset so that it said "AVC" instead of "HEVC".

Now files that were dropped into the source S3 bucket would get converted to the HLS and DASH streaming formats as well as a H.264 single-file format.

Secondary Cloudfront Instance

Out of the box, the AWS Answer sets up one Cloudfront instance to point to the S3 destination bucket. To be more precise, it's setup to point to the S3 bucket used for the streaming video files output from the transcoding job. Your single-file H.264 files actually get put into an entirely different S3 bucket.

Since we wanted to serve both, we had to set up one more Cloudfront instance that pointed to the S3 MP4 bucket.

So our setup in the end:

  1. Cloudfront Instance 1 pointed to S3 Bucket for Streaming Files (HLS, DASH). The name of this bucket has "abrdestination" in its name, ABR standing for "adaptive bitrate".
  2. Cloudfront Instance 2 pointed to S3 Bucket for Single-file Files (H.264). The name of this bucket has "mp4destination" in its name.

Update: Setting up Access from your Secondary Cloudfront Instance

When the AWS Answer set up your 'mp4destination' bucket, it blocked access by default. So you need to allow access from your Cloudfront Instance 2 to your 'mp4destination' bucket. By default, the AWS Answer already sets this up between Cloudfront Instance 1 and the 'abrdestination' bucket. To set up this access, we actually need what's called an 'Origin Access Identity'. Luckily, we can just use the one that was already set up between Cloudfront Instance 1 and the 'abrdestination' bucket. If you log into AWS and go to your Cloudfront console, you'll see 'Origin Access Identity' on the left. If you click on it, you'll see the 'VOD on AWS' user that was set up to allow access between Cloudfront Instance 1 and the 'abrdestination' bucket. Again, we are going to re-use this for Cloudfront Instance 2.

To do this, click on your Cloudfront Instance 2 and then click on the 'Origin' tab. Check the mp4 origin, and then click the 'Edit' button. Under 'Origin Access Identity', choose 'Use Existing' and select the 'VOD on AWS' user. You'll also want to select 'Yes, Update Bucket Policy' for 'Grant read permissions on Bucket'. Click 'Yes, Edit'. Saving changes like this usually takes a while to deploy so monitor the main Cloudfront console to see when the changes have taken effect.

You can also check out the Bucket Policy of the mp4 bucket to make sure that Amazon correctly added permissions for the 'VOD on AWS' account.

Alright, now your Cloudfront Instance 2 has proper permissions to the mp4 bucket.

Update: Setting up SSL for your Cloudfront instance

If you plan to serve your videos over SSL, you can provision an SSL certificate for your Cloudfront instance. To do this, log into AWS and go to the 'Certificate Manager' console. From there, you can request an SSL certificate. For our example, we would be requesting two SSL certificates, one for each Cloudfront instance.


The one catch is that you'll have to have either admin e-mail access for your domain or access to the DNS records for your domain. After you verify ownership of your domain, the certificate will be issued. At that point, you can go back to Cloudfront, click on your Cloudfront instance, click 'Edit', and select the newly issued SSL certificate.

Putting your videos into Code

With all that set up properly, you are now ready to put your videos into your code. Specifically, you'll be putting them into some HTML5 video tags.

There are a couple of ways to get your URLs. You can log into AWS and go to your Dynamo DB. From there, you can browse your items and you'll see your HLS URL.

But in general, you can also browse your S3 destination buckets, and you'll end up with URLs similar to this:


Note that the .m3u8 and .mp4 files are served on different Cloudfront instances, so the subdomain will be different. Also notice that for the .mp4 file, you'll have to choose either the 1080p or 720p file to serve up.

Once you have those URLs, you can put them in your HTML:

<video width="100%" controls>
  <source src="https://<cloudfront-id-for-instance-1>/<id-of-job-in-s3>/hls/<video-file-name>.m3u8" />
  <source src="https://<cloudfront-id-for-instance-2>/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4" />
  Your browser does not support HTML5 video.
</video>

And with that, you have solved the first part of the problem: serving on-demand streaming video to your users.

Now, the question is, how do you restrict it to only authenticated users?

Blocking Public Access

The first part is quite simple. You want to start off by blocking public access to your URLs. The AWS Answer by default makes your S3 buckets private, so you should be ok on that front. Users will not be able to paste an S3 URL directly into their browser and watch a video.

However, the Cloudfront instances are set up with access to the S3 buckets, and they in turn serve up the files publicly. So while users can't access your S3 bucket files directly, they can certainly access the resources through Cloudfront.

So our first step is to update Cloudfront's behaviour.

  1. Log in to AWS, go to Cloudfront, and take a look at your instances.
  2. Click on your first instance.
  3. Click on the Behaviors tab.
  4. You should see a row for the Default(*) path pattern. Check it and then click "Edit" above.
  5. Set "Restrict Viewer Access" to "Yes"
  6. Click "Yes, Edit"
  7. Repeat for your second Cloudfront instance.

Now, one thing to note: these changes don't take effect immediately. They take some time. Back on your main Cloudfront page where you see the listing of instances, you'll see a status column. If you've just made these changes, the status will probably be "In Progress". You'll have to wait until it says "Deployed" before any of this works.

Once it's deployed, try pasting one of your video URLs into your browser. You should see something like this:

That's a good thing! Now users can't just copy and paste your URL and share it with other users.

You've successfully blocked public access. So how do you give access to authenticated users now?

Domain setup

So for the next part, you will unfortunately need access to your domain registrar or DNS servers. If you don't know what that is, you'll basically need the ability to point your domains at certain servers. It's important that the website you are serving your videos on has the same domain as the Cloudfront servers for all of this to work.

This is how I have it set up:

Domain                      Points to
www.bitesite.ca             Main website
video-stream.bitesite.ca    Cloudfront Instance 1
video-file.bitesite.ca      Cloudfront Instance 2

For Cloudfront, you'll have to set up CNAME records, and you'll have to log into AWS and configure your Cloudfront instances' alternate domain names. You can do this by clicking on the instance and, on the General tab, clicking "Edit". Again, you'll have to wait until the status is "Deployed" before all of this starts working.

The domain setup here is absolutely crucial because we'll be using cookies, and cookies heavily depend on the domain you are visiting.

Obviously you'll adapt this for your own domain.

Local Testing

In the next step, we'll start talking about cookies. For those who don't know, cookies are basically a collection of key-value pairs that get sent with every request to the same host/domain. The interesting thing about cookies is that they are typically set by server code. A typical flow looks like this:

  1. A browser makes a request for a web page.
  2. The server receives the request, and sets a cookie.
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value.
  5. The browser sends the cookie data on every subsequent request to the server.
  6. The server uses the cookie data.

Now the interesting thing with cookies is that they are limited by domain. What's even more interesting is that cookies can be set up to apply to any subdomain within the master domain. So if set up properly, the browser will not only send a cookie on every request back to the same server, it will also send that cookie to any other server on the same master domain. It looks something like this:

  1. A browser requests a page from www.bitesite.ca.
  2. The server receives the request, and sets a cookie for the master domain (bitesite.ca).
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value for bitesite.ca.
  5. The browser sends the cookie data on every subsequent request to any server whose host ends with bitesite.ca (which includes www.bitesite.ca, video-stream.bitesite.ca, video-file.bitesite.ca, etc.)
  6. Any of those servers can use the cookie data.
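To make the mechanics concrete, here's a toy sketch of the Set-Cookie header a server emits in each case. The names and values are made up for illustration; in Rails the framework builds this header for you.

```ruby
# Toy illustration of how the Domain attribute widens a cookie's scope.
def set_cookie_header(name, value, domain: nil)
  header = "#{name}=#{value}"
  # Without a Domain attribute the cookie is "host-only": browsers send it
  # back only to the exact host that set it (e.g. www.bitesite.ca).
  # With Domain=bitesite.ca, browsers also send it to every *.bitesite.ca host.
  header += "; Domain=#{domain}" if domain
  header
end

puts set_cookie_header("session", "abc123")
# session=abc123
puts set_cookie_header("session", "abc123", domain: "bitesite.ca")
# session=abc123; Domain=bitesite.ca
```

The second form is the one we'll want later, so that cookies set by the main website also travel to the Cloudfront domains.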

You can see where this is going.

So this is all well and good if you're hosting your code on a server with the proper domains set up, but what about when you're still developing and you want to test on localhost? Well, localhost is its own domain. So if you're testing on localhost, how are you going to get your server to set a cookie for bitesite.ca?

Ideally, you would want a bitesite.ca domain to point to your localhost.

Well, it turns out there are a lot of different ways to do this, but the quickest is to edit your "hosts" file.

Warning: Editing your hosts file alters the way your system resolves URLs. So be very careful when editing it, and when you're done testing, consider reverting it.

Because I typically don't want to mess with real websites in my browser, and because it doesn't really matter what subdomain I use, rather than pointing www.bitesite.ca to my localhost, I chose to point an unused subdomain to my localhost.

On a mac, you'll open up /etc/hosts and add this line to it:
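As a hypothetical example (the subdomain here is made up; use whichever unused subdomain of your domain you like), the entry looks like:

```
127.0.0.1    dev.bitesite.ca
```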

With that line in place, when I accessed that subdomain in my browser, it would hit my localhost, and any cookies I set could be scoped to the bitesite.ca domain.

Since your browser requests port 80 by default and binding to port 80 usually requires privileges, you can run Rails on port 80 by doing something like this:

sudo rails s -p 80

If you're using RVM like me, you'll have to do something like this:

rvmsudo rails s -p 80

(One catch: the server is now running as the 'root' user, so make sure your database accepts connections from 'root'. I had to add 'root' as another user to my PostgreSQL database.)

So you should now be able to fire up your Rails server and access it at your chosen subdomain.

Alright, we're all set to move on.

AWS Signed Cookies

So when you restrict access to S3 or Cloudfront files, Amazon provides two mechanisms for granting temporary access to those files:

  • Signed URLs
  • Signed Cookies

Signed URLs can actually be applied to both S3 URLs and Cloudfront URLs, but for our example, we're only dealing with Cloudfront URLs. A Signed URL is basically a URL that you can provide to a user to give them temporary access to a resource. You write server-side code to generate a URL that contains query string parameters specifying how long that URL is valid. When that URL hits the Amazon AWS servers, they check the URL's parameters to see if the URL is valid: whether it has expired, and whether its signature shows it was created by an authorized party. In practice, your server-side code has access to AWS private keys to create these special signed URLs, and Amazon even provides Ruby libraries to do this.

Signed Cookies are very similar (and as far as I know only apply to Cloudfront URLs). The idea of a signed cookie is that you create a cookie containing a policy. That policy specifies what types of files the cookie applies to. Then when the browser requests a URL from the Amazon servers, the Amazon servers look at the cookie that comes along with the request (remember, cookies are sent automatically with every request to the same master domain) and check the policy. If the policy allows the URL that the browser is requesting, Amazon sends back the resource successfully. For security reasons, the Amazon servers also check that the cookie was created by an authorized party. This usually works by having your server code create the cookie using Cloudfront private keys. Again, Amazon provides Ruby libraries to do this.
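To demystify the signing step, here's a plain-Ruby sketch (standard library only, not the AWS SDK) of roughly how such a cookie gets built. The key pair is generated on the fly purely for illustration, and the key pair id is a placeholder; in real use you sign with the private key of your Cloudfront key pair.

```ruby
require "openssl"
require "base64"
require "json"

# Illustration only: a throwaway RSA key and a placeholder key pair id.
private_key = OpenSSL::PKey::RSA.new(2048)
key_pair_id = "APKAEXAMPLE"

# A custom policy: which resources the cookie grants access to, and until when.
policy = {
  "Statement" => [{
    "Resource"  => "http*://video-*.bitesite.ca/*",
    "Condition" => { "DateLessThan" => { "AWS:EpochTime" => Time.now.to_i + 600 } }
  }]
}.to_json

# Cloudfront uses a URL-safe base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'
cf_encode = ->(data) { Base64.strict_encode64(data).tr("+=/", "-_~") }

# The three cookies Cloudfront expects with each request.
signed_cookies = {
  "CloudFront-Policy"      => cf_encode.call(policy),
  "CloudFront-Signature"   => cf_encode.call(private_key.sign(OpenSSL::Digest.new("SHA1"), policy)),
  "CloudFront-Key-Pair-Id" => key_pair_id
}
```

The AWS SDK does exactly this for you, which is what we'll use below; the sketch just shows why the "signed" part can't be forged without the private key.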

The big advantage of Signed Cookies is that you specify a policy that can encompass more than one file. So it's an easy way to give access to an entire set of files. This is particularly important when it comes to streaming, because when you stream a video, you're actually requesting multiple files (10-second chunks, for example). Rather than creating a Signed URL for every one of those chunks, you can create a cookie that grants access to all of them.

So for this solution, we'll set up signed cookies for users that are authenticated. But to create these Signed Cookies, our server side code has to be authorized to do so. How do we authorize our server code to create cookies? We use Cloudfront private keys.

Cloudfront Key Pairs

If anybody could randomly create a signed cookie, it wouldn't really be protected. In fact, the "signed" part is what makes it protected. Only authorized parties can create cookies that will pass the check the Amazon servers perform. To make your Rails code authorized, it will need access to Cloudfront keys. To do this:

  1. Log in to Amazon AWS
  2. Click on your username in the upper-right and select "My Security Credentials".
  3. Ignore the warning about IAM by clicking "Continue to Security Credentials" as Cloudfront keys only work at the User Account level.
  4. Expand the "Cloudfront key pairs" section.
  5. Click on "Create New Key Pair".
  6. The pair will be created and you'll be presented with some options.
  7. Download the PRIVATE key file.
  8. Then click "close".
  9. You'll be brought back to your list of Keys. You should also see the "ACCESS KEY ID". Keep this window open as you'll need that value.

We then put the private key file into our source, but be warned that this file should not be accessible to the public. So if you're hosting your source code in a public repository, you'll want to find somewhere else to put this file. Because our source code is private, we put the private key in /railsapproot/cloudfront.

Creating a signed cookie in Rails

Ok, so we have our private key and access key ID ready to use so we can properly create signed cookies. Let's put these to use.


First, grab the 'aws-sdk' gem. I used version 3 of the SDK. In your Gemfile:

gem 'aws-sdk', '~> 3'


Second, let's set up a global Cookie signer to use in our app. Create an initializer config/initializers/aws.rb and put this code in it:
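A minimal sketch of such an initializer, assuming the aws-sdk v3 Aws::CloudFront::CookieSigner API (the key pair id and key file name here are placeholders):

```ruby
# config/initializers/aws.rb -- a sketch; fill in your own values.
CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
  key_pair_id: "YOUR_ACCESS_KEY_ID",
  private_key_path: Rails.root.join("cloudfront", "private_key.pem").to_s
)
```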


You'll fill in your key_pair_id with the ACCESS KEY ID from the previous step. For the private_key_path, type the path to where you saved the private key file. The Access Key ID might work better as an environment variable as well. So you might have something more like:
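For example, a sketch along those lines (the environment variable name and key file name are hypothetical):

```ruby
# config/initializers/aws.rb -- sketch with the key pair id read from the environment.
CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
  key_pair_id: ENV["CLOUDFRONT_ACCESS_KEY_ID"],
  private_key_path: Rails.root.join("cloudfront", "private_key.pem").to_s
)
```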


before_action to create the cookie

So, the next question is, when do you actually want to create the cookie? My first approach was to create the cookie right after the user signed in, right then and there. That seemed smart. The thing is, if they signed in and then left their browser open for a long time, the cookie might expire and they'd have to sign out and sign back in. You could manage this by signing them out automatically, but I decided that was too complicated for my use case. You can definitely do it that way, but here's what I decided to do.

I decided to write a before_action for all actions that checks if the user is signed in. Then, if the user is signed in, I set the cookie. This way, every request they perform while they're signed in just ends up renewing the cookie. The only catch to this is ensuring you clean up the cookie when they sign out.

So here's what my application controller looked like:

class ApplicationController < ActionController::Base
  before_action :set_cloudfront_signed_cookie

  ...

  private

  def set_cloudfront_signed_cookie
    if user_signed_in?
      cookies_values = CF_COOKIE_SIGNER.signed_cookie("", policy: policy)
      cookies_values.each do |k, v|
        cookies[k] = { value: v, expires: 10.minutes.from_now, domain: :all }
      end
    end
  end

  ...
end

So this runs before every action. If the user is signed in, we create a signed cookie using the CF_COOKIE_SIGNER from the AWS SDK. That will spit back a hash of values that we have to write to the client's cookies. For each cookie value, we set it to expire after 10 minutes, and we also specify the very important domain: :all. That argument sets the cookie for the whole ".bitesite.ca" domain rather than just "www.bitesite.ca". Once you do that, those cookie values will also be sent with requests made to "video-stream.bitesite.ca" and "video-file.bitesite.ca".

Let's take a closer look at the initial call to CF_COOKIE_SIGNER.signed_cookie.

First of all, you'll see I've passed an empty string in as the URL. This is not just for demonstration purposes, and not a mistake. This is literally the code I use, and I'll tell you why. If you pass a custom policy to this method, the URL parameter doesn't matter at all. So I purposely put "" to let other developers know that the URL has nothing to do with this all working.

Now, what I just said is that the URL is ignored if you pass in a custom policy, and that's what the second argument, policy, is. Let's take a look at that method below, which is also a private method in the application controller:

class ApplicationController < ActionController::Base
  ...

  private

  def policy
    resource = "http*://video-*.bitesite.ca/*"
    expiry = 10.minutes.from_now
    {
      "Statement" => [
        {
          "Resource" => resource,
          "Condition" => {
            "DateLessThan" => { "AWS:EpochTime" => expiry.utc.to_i }
          }
        }
      ]
    }.to_json.gsub(/\s+/, '')
  end
end

This is the policy that is included in the cookie, which the Amazon servers check when the browser makes a request. The expiry specifies how long the cookie is valid for. Remember, we call this on every action a signed-in user takes, so it gets renewed every time they browse to a page. What's more important here is the way the resource string is constructed. Amazon allows you to put wildcards in the resource URL. This is the key to making the policy work for multiple files (and multiple servers, for that matter).

Let's break down the three wildcards. First, you have the scheme wildcard:

http*

This is optional, but it basically allows both secure and non-secure requests. That is, it will allow the browser to request "http://" and "https://" URLs.

Secondly, we have the host:

video-*.bitesite.ca

What's nice about this is that it allows the cookie to work for both our streaming Cloudfront instance and our single-file Cloudfront instance. That is, it will work for both "video-stream.bitesite.ca" and "video-file.bitesite.ca".

And lastly, we have the path:

/*

That allows the cookie to apply to basically any file hosted on those servers.
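If you want to sanity-check the policy outside of Rails, here's a plain-Ruby version of the method above (Time.now + 600 stands in for Rails' 10.minutes.from_now; everything else matches the controller code):

```ruby
require "json"

# Builds the custom Cloudfront policy JSON for a given resource pattern and expiry.
def cloudfront_policy(resource, expiry)
  {
    "Statement" => [
      {
        "Resource" => resource,
        "Condition" => {
          "DateLessThan" => { "AWS:EpochTime" => expiry.utc.to_i }
        }
      }
    ]
  }.to_json.gsub(/\s+/, '')
end

policy_json = cloudfront_policy("http*://video-*.bitesite.ca/*", Time.now + 600)
```

Printing policy_json is an easy way to confirm the wildcards ended up where you expect before handing the policy to the cookie signer.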

Alright, that's it. Put that into your code, sign in, and browse to a page. Your cookies should now be set. It's really easy to see these in Chrome: open your developer tools and go to the Application tab. Open up Cookies and you should see cookies for your domain.

You'll see that the domain on the Cloudfront cookies is ".bitesite.ca".

The HTML Code

With your cookies in place ready to be sent with your video-stream and video-file requests, you're ready to cap it all off. Code a page, and put this in:

<video width="100%" controls>
  <source src="" />
  <source src="" />
  Your browser does not support HTML5 video.
</video>

Feel free to add thumbnails generated by AWS and controls:

<video width="100%" controls poster="">
  <source src="" />
  <source src="" />
  Your browser does not support HTML5 video.
</video>

And that's pretty much it! Everything should be working.

If you want to make sure it's secure, grab the mp4 URL, log out of your app, wait 10 minutes for the cookie to expire, and then paste that URL into your browser. You should get an error.

Finishing it all off, cleaning up your cookies

Now, because of my decision to renew the cookie on every request, it's a good idea to kill the cookie right after a user logs out. So wherever you handle logging out (I use Devise, so I override SessionsController#destroy), do this:

class SessionsController < Devise::SessionsController
  def destroy
    clear_cloudfront_cookies
    super
  end

  private

  def clear_cloudfront_cookies
    cookies.delete("CloudFront-Key-Pair-Id", domain: :all)
    cookies.delete("CloudFront-Policy", domain: :all)
    cookies.delete("CloudFront-Signature", domain: :all)
  end
end

It's VERY important that you specify domain: :all, because that's how you set the cookies up. Otherwise, they won't delete properly.


With that, you now have a great video solution! Congrats. This took me 3-4 solid days of debugging to get through, so hopefully this helps some peeps. The great thing about the AWS Answer is that it gives you nice infrastructure for uploading, transcoding, and serving the files. So in the future you can build an interface for users to upload files. Once they're uploaded to the S3 source bucket, they will automatically get transcoded, and then you can inspect the Dynamo DB programmatically to serve them up.

Our project didn't require that level of sophistication, but it's good to know we have it in our back pocket if we need it. With Cloudfront and streaming files, you know you're serving your users fast and with minimal data to view the video.

Always room for improvement, so be sure to let us know if you have anything to add to this. (At the time of writing this blog we don't have comments implemented, but they will be coming soon.)

Thanks for reading.

Casey Li
CEO & Founder, BiteSite