Serving Videos to Authenticated Users using Amazon AWS and Ruby on Rails

amazon aws video coding software ruby on rails

Update: The original posting of this article left out two key points that have to do with serving your videos over SSL and allowing your second Cloudfront instance to access your S3 MP4 bucket. Scroll down to the section on 'Secondary Cloudfront instance' for the details.

So we recently got a project to do something we have never done before: create a web application that would only serve videos to authenticated users.

It's a pretty common use case: if you're logged in as someone who is authorized to view the videos, you can click through to a page and watch a video. If you're not, you can't view that page, nor can you copy the video's link and watch it directly. Now, as common as this is to use, it isn't super common to develop. At least not for us.

Yes, you can use streaming services like Vimeo etc., but what if your client wants their own custom solution? This is the challenge we were faced with.

And it turns out, there were really two things we had to solve:

  • Serving streaming video on-demand
  • Securing the streaming video to only authenticated users

(Just a small note: video streaming usually breaks down into two categories, live or on-demand. Live streaming is when you are shooting video and streaming it to users' devices at the same time. On-demand is like traditional YouTube, Vimeo, or Netflix, where your users watch pre-recorded video. For our project, we were implementing on-demand video.)

Ruby on Rails is our development web framework of choice, and AWS S3 is our asset storage of choice. So we knew the solution involved some combination of those two.

Now, the thing is - a good chunk of this is documented around the web, but I didn't come across a post that showed the end-to-end solution for Rails. So here you go.

Amazon Web Services

If you don't already know, Amazon has an entire part of its business called Amazon Web Services, separate from its consumer-facing online shopping platform. Amazon Web Services is a collection of services that Amazon provides to developers to help them build applications. They have everything from virtual servers, to databases, to media encoding, to storage systems. Each one of these has its own name; for example, the storage solution is called "S3". You can do some more reading on Amazon Web Services on their official home page.

AWS Answers

When I started this project, I knew nothing about streaming video to devices. I originally thought the easiest thing to do would be to upload a video to Amazon S3 and just link to that video in the HTML code. As I started to research hosting video on AWS, though, it turns out that is not a good solution. With that approach, you force every user to download the entire video, scrubbing back and forth is not ideal, and it's not true streaming in the sense that the browser isn't downloading small packets of the video.

The preferred solution involves encoding your video into streamable chunks and serving those to the customer and only if their browser doesn't support streaming do you serve them the entire file. This solution also helps with dynamically changing the quality depending on the user's connection speed. So the first thing to consider is encoding your video into multiple formats that browsers support to optimize the video viewing experience.
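For context on what those "streamable chunks" look like: with a streaming format like HLS, the player first downloads a small text playlist (an .m3u8 file) that indexes short video segments, and then fetches segments as it needs them. A simplified example of such a playlist (file names here are placeholders):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment_00001.ts
#EXTINF:10.0,
segment_00002.ts
#EXT-X-ENDLIST
```

This is why scrubbing and quality switching work so much better than with a single monolithic file: the player can jump to, or re-request, individual segments.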

On top of that, Amazon recommends that you consider using their CDN service, Cloudfront, to serve your assets to your users. What a CDN does is effectively copy your resources to multiple servers so that when a browser accesses your resource, it grabs it from a server that is geographically closer to it. This ensures fast responses and load balancing across all your different users. So the second thing to consider is setting up Cloudfront to serve your videos.

To do all this, there are actually a lot of moving parts and a lot of complexity involved. The great thing is, Amazon actually does supply all the services needed to execute this, but the question is how do you set it all up?

Well that's where AWS Answers comes in.

AWS Answers is a collection of solutions to common problems. So let's say you wanted to build an "Internet of Things" solution. AWS Answers has a solution for that which sets up everything you need to get up and running. Let's say you wanted to create a backend server for a mobile app - well, there's an AWS Answer for that as well.

AWS Answers comes with documentation such as guides and FAQs about how to set up everything you need. For example, it may tell you, "You should set up an AWS DynamoDB table and an AWS S3 bucket...". But the coolest thing about AWS Answers is that the solutions also come with automatic deployment scripts. This means that you can click a button, fill out a couple of fields, and then boom - AWS automatically sets up everything you need. It's pretty amazing.

And guess what? There is an AWS Answer for On-demand Video Streaming.

Video On-Demand on AWS

So this article won't explain all the details of the "Video On-Demand on AWS" AWS Answer, but I will break down the basics of all the moving parts. When you deploy this AWS Answer, here are some of the major parts that get setup for you:

  • S3 Buckets (both for the original video files, and the transcoded files)
  • Dynamo DB (a database to keep track of your video files)
  • MediaConvert (to transcode your actual videos)
  • CloudFront (to serve your files to your users)

The AWS Answer actually sets up Lambda functions and Step functions as well, but I want to concentrate on the major parts in this article. You can read about everything else on the AWS Answer Page.

The basic workflow is this.

  1. You upload a video file into one of the S3 buckets that the AWS Answer set up for you (the source bucket).
  2. The bucket is set up to automatically run a transcode job on any video files in that bucket.
  3. The transcode job starts to transcode your video file into appropriate streaming formats.
  4. The transcode job drops its completed files (transcoded video files and thumbnails) into another S3 bucket (the destination bucket) that the AWS Answer set up for you.
  5. The newly transcoded video files are now available to the Cloudfront instance that the AWS Answer set up for you.
  6. You put the Cloudfront URL to your video into your code.

That is the basic setup. So once the AWS Answer is deployed, you literally just drop files into the source S3 bucket, AWS does the rest, and you get a URL for your video that you can put into your code.

All is good? We're done right?

Not quite.

Tweaks to "Video On-Demand on AWS"

So the Amazon AWS Answer is great, but it's not exactly perfect for everybody, and it definitely was not perfect for us. As we went down the road of putting the Cloudfront links in our code, we ran into a lot of issues, and it turns out the solution was to tweak some of the services and configuration that the AWS Answer set up for us.

Here are the two major changes we made.

H.264 Encoding

So by default, the AWS Answer sets up a couple of encodings for the videos. When you drop a video into the source S3 bucket, AWS transcodes your file into multiple formats. The first set of these formats are all streaming formats, and then it also transcodes your video into a "single-file" format for browsers that don't support the streaming formats.

For streaming, the AWS Answer sets up encodings for HLS and DASH at various resolutions. For the "single-file" format, the AWS Answer sets up encodings for H.265 HEVC at various resolutions. If you're curious, you can actually go into your "MediaConvert" page and click on "Output presets" to see this list:

Keep in mind, that if browsers support the streaming formats, they don't care about the single-file format. It's only the browsers that don't support the streaming formats that care about the single-file format.

The streaming formats are actually great and work with browsers like Safari. The problem is with the "single-file" format. Most browsers that don't support streaming formats, like Chrome, don't support HEVC H.265 either. So our backup single-file format wouldn't work.

So the first change we made to the default solution was change the MP4 output presets.

We changed the video codec to "MPEG-4 AVC (H.264)", left everything else as default, and filled in the bitrates to match the originals: "8500000" for 1080p and "6500000" for 720p. We also updated the names of the output presets so that they said "AVC" instead of "HEVC".

Now files dropped into the source S3 bucket would get converted to the HLS and DASH streaming formats as well as an H.264 single-file format.

Secondary Cloudfront Instance

Out of the box, the AWS Answer sets up one Cloudfront instance pointing to the S3 destination bucket. To be more precise, it's set up to point to the S3 bucket used for the streaming video files output by the transcoding job. Your single-file H.264 files actually get put into an entirely different S3 bucket.

Since we wanted to serve both, we had to set up one more Cloudfront instance that pointed to the S3 MP4 bucket.

So our setup in the end was:

  1. Cloudfront Instance 1 pointed to S3 Bucket for Streaming Files (HLS, DASH). The name of this bucket has "abrdestination" in its name, ABR standing for "adaptive bitrate".
  2. Cloudfront Instance 2 pointed to S3 Bucket for Single-file Files (H.264). The name of this bucket has "mp4destination" in its name.

Update: Setting up Access from your Secondary Cloudfront Instance

When the AWS Answer set up your 'mp4destination' bucket, it blocked access by default. So you need to allow access from your Cloudfront Instance 2 to your 'mp4destination' bucket. By default, the AWS Answer already sets this up between Cloudfront Instance 1 and the 'abrdestination' bucket. To set up this access, we actually need what's called an 'Origin Access Identity'. Luckily, we can just reuse the one that was already set up between Cloudfront Instance 1 and the 'abrdestination' bucket. If you log into AWS and go to your Cloudfront console, you'll see 'Origin Access Identity' on the left. If you click on it, you'll see the 'VOD on AWS' user that was set up to allow access between Cloudfront Instance 1 and the 'abrdestination' bucket. Again, we are going to reuse this for Cloudfront Instance 2.

To do this, click on your Cloudfront Instance 2 and then click on the 'Origin' tab. Check the mp4 origin, and then click the 'Edit' button. Under 'Origin Access Identity', choose 'Use Existing' and select the 'VOD on AWS' user. You'll also want to select 'Yes, Update Bucket Policy' for 'Grant read permissions on Bucket'. Click 'Yes, Edit'. Saving changes like this usually takes a while to deploy so monitor the main Cloudfront console to see when the changes have taken effect.

You can also check out the Bucket Policy of the mp4 bucket to make sure that Amazon correctly added permissions for the 'VOD on AWS' account.
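The statement Amazon adds should look roughly like this; the identity ID and bucket name below are placeholders, not values from our setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE12345"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-stack-mp4destination-bucket/*"
    }
  ]
}
```

If you see a statement like this naming your 'VOD on AWS' identity, the permissions were added correctly.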

Alright, now your Cloudfront Instance 2 has proper permissions to the mp4 bucket.

Update: Setting up SSL for your Cloudfront instance

If you plan to serve your videos over SSL, you can provision an SSL certificate for your Cloudfront instance. To do this, log into AWS and go to the 'Certificate Manager' console. From there, you can request an SSL certificate. For our example, we would be requesting two SSL certificates, one for each of our two Cloudfront domains.

The one catch is that you'll have to either have admin e-mail access for your domain or have access to the DNS records for your domain. After you verify ownership of your domain, the certificate will be issued. At that point, you can go back to Cloudfront, click on your Cloudfront instance, click 'Edit', and select the newly issued SSL certificate.

Putting your videos into Code

With all that set up properly, you are now ready to put your videos into your code. Specifically, you'll be putting them into some HTML5 video tags.

There are a couple of ways to get your URLs. You can log into AWS and go to your Dynamo DB. From there, you can browse your items and you'll see your HLS URL.

But in general, you can also browse your S3 destination buckets, and you'll end up with URLs similar to this:

https://<cloudfront-domain-for-instance-1>/<id-of-job-in-s3>/hls/<video-file-name>.m3u8
https://<cloudfront-domain-for-instance-2>/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4

Note that the .m3u8 and .mp4 files are served from different Cloudfront instances, so the subdomain will be different. Also notice that for the .mp4 file, you'll have to choose either the 1080p or 720p file to serve up.

Once you have those URLs, you can put them in your HTML:

<video width="100%" controls>
  <source src="https://<cloudfront-domain-for-instance-1>/<id-of-job-in-s3>/hls/<video-file-name>.m3u8" />
  <source src="https://<cloudfront-domain-for-instance-2>/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4" />
  Your browser does not support HTML5 video.
</video>

And with that, you have solved the first part of the problem: serving on-demand streaming video to your users.

Now, the question is: how do you restrict it so that only authenticated users can watch?

Blocking Public Access

The first part is quite simple. You want to start off by blocking public access to your URLs. The AWS Answer makes your S3 buckets private by default, so you should be OK on that front. Users will not be able to paste an S3 URL directly into their browser and watch a video.

However, the Cloudfront instances are set up with access to the S3 buckets, and they in turn serve up the files publicly. So while users can't access your S3 bucket files directly, they can certainly access the resources through Cloudfront.

So our first step is to update Cloudfront's behaviour.

  1. Log in to AWS, go to Cloudfront, and take a look at your instances.
  2. Click on your first instance.
  3. Click on the Behaviors tab.
  4. You should see a row for the Default(*) path pattern. Check it and then click "Edit" above.
  5. Set "Restrict Viewer Access" to "Yes"
  6. Click "Yes, Edit"
  7. Repeat for your second Cloudfront instance.

Now, one thing to note. These changes don't take effect immediately. They take some time. Back on your Cloudfront main page where you see the listings of instances, you'll see a status column. If you've just made these changes, the status will probably be "In progress". You'll have to wait until this says "Deployed" before any of this works.

Once it's deployed, try pasting one of your video URLs into your browser. You should see something like this:

That's a good thing! Now users can't just copy and paste your URL and share it with other users.

You've successfully blocked public access. So how do you give access to authenticated users now?

Domain setup

So for the next part, you will unfortunately have to have access to your domain registrar or DNS servers. If you don't know what that is, you'll basically need the ability to point your domains at certain servers. It's important that the website serving your videos has the same domain as the Cloudfront servers for this to all work.

This is how I have it setup:

Domain                       Points to
www.bitesite.ca              Main website
video-stream.bitesite.ca     Cloudfront Instance 1
video-file.bitesite.ca       Cloudfront Instance 2

For Cloudfront, you'll have to set up CNAME records, and you'll have to log into AWS and configure your Cloudfront instances' alternate domain names. You can do this by clicking on the instance and, on the General tab, clicking "Edit". Again, you'll have to wait until the status is "Deployed" before all this starts working.
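For example, the CNAME records would look something like this; the cloudfront.net hostnames below are placeholders, so use the domain names shown on your own instances:

```
video-stream.bitesite.ca.    CNAME    d111111abcdef8.cloudfront.net.
video-file.bitesite.ca.      CNAME    d222222abcdef8.cloudfront.net.
```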

The domain setup here is absolutely crucial, as we'll be using cookies, and cookies depend heavily on the domains you are visiting.

Obviously you'll adapt this for your own domain.

Local Testing

In the next step, we'll start talking about cookies. For those who don't know, cookies are basically a collection of key-value pairs that generally get sent with every request to the same host/domain. The interesting thing about cookies is that they are typically set by server code. A typical flow looks like this:

  1. A browser makes a request for a web page.
  2. The server receives the request, and sets a cookie.
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value.
  5. The browser sends the cookie data on every subsequent request to the server.
  6. The server uses the cookie data.

Now, the interesting thing with cookies is that they are limited by domain. What's even more interesting is that cookies can be set up to apply to any subdomain of the parent domain. So if set up properly, the browser will not only send a cookie on every request back to the same server, it will also send that cookie to any other server under the same domain. So it would look something like this:

  1. A browser requests a page from www.bitesite.ca.
  2. The server receives the request and sets a cookie for the parent domain (bitesite.ca).
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value for bitesite.ca.
  5. The browser sends the cookie data on every subsequent request to any server whose host ends with bitesite.ca (which includes www.bitesite.ca, video-stream.bitesite.ca, video-file.bitesite.ca, etc.).
  6. Any of those servers can use the cookie data.

You can see where this is going.

So this is all well and good if you're hosting your code on a server with the proper domains set up, but what about when you're still developing and you want to test on localhost? Well, localhost is its own domain. So if you're testing at something like localhost:3000, how are you going to get your localhost to set a cookie for bitesite.ca?

Ideally, you would want a subdomain of bitesite.ca to point to your localhost.

Well turns out there are a lot of different ways to do this, but the quickest way is to edit your "hosts" file.

Warning: Editing your hosts file will alter the way your system works when it resolves a URL. So be very careful when editing this and when you're done testing, maybe revert it.

Because I typically don't want to mess with real websites in my browser, and because it doesn't really matter what subdomain I use, I chose to point a spare subdomain of bitesite.ca at my localhost rather than the real www subdomain.

On a Mac, you'll open up /etc/hosts and add a line mapping your chosen subdomain to
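For illustration, if you picked a hypothetical subdomain like cloudfront-testing.bitesite.ca, the /etc/hosts line would be:

```
127.0.0.1    cloudfront-testing.bitesite.ca
```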

With that line in place, when I accessed that subdomain, it would hit my localhost, and any cookies I set applied to the bitesite.ca domain.

Port 80 is usually privileged, so to run your Rails server on port 80 you'll need to do something like this:

sudo rails s -p 80

If you're using RVM like me, you'll have to do something like this:

rvmsudo rails s -p 80

(One catch to this: the server is now running under the 'root' user, so make sure your database can accept connections from 'root'. I had to add 'root' as another user to my PostgreSQL database.)

So you should now be able to fire up your Rails server and access it at your chosen subdomain.

Alright, we're all set to move on.

AWS Signed Cookies

So when you restrict access to S3 or Cloudfront files, Amazon provides you with two mechanisms to give temporary access to those files:

  • Signed URLs
  • Signed Cookies

Signed URLs can actually be applied to both S3 URLs and Cloudfront URLs, but for our example, we're only dealing with Cloudfront URLs. A Signed URL is basically a URL that you can provide to the user that gives them temporary access to a resource. The way it works is you write server-side code to generate a URL that contains query string parameters specifying how long that URL is valid. When that URL hits the Amazon AWS servers, the Amazon servers check the URL's parameters to see if it is valid. They check whether it has expired, and also check a signature to ensure that the URL was created by an authorized party. This usually works by writing server-side code that has access to AWS private keys to create these special signed URLs. Amazon even provides Ruby libraries to do this.

Signed Cookies are very similar (and as far as I know, they only apply to Cloudfront URLs). The idea of a signed cookie is that you create a cookie that contains a policy. That policy specifies what types of files the cookie applies to. Then when the browser requests a URL from the Amazon servers, the Amazon servers look at the cookie that comes along with the request (remember, cookies are sent automatically with every request to the same parent domain) and take a look at the policy. If the policy allows the URL that the browser is requesting, Amazon sends back the resource successfully. For security reasons, the Amazon servers also check that the cookie was created by an authorized party. This usually works by having your server code create the cookie using Cloudfront private keys. Again, Amazon provides Ruby libraries to do this.

The big advantage of Signed Cookies is that you can specify a policy that encompasses more than one file. So it's an easy way to give access to an entire set of files. This is particularly important when it comes to streaming, because when you stream a video, you're actually requesting multiple files (10-second chunks, for example). So rather than creating a Signed URL for every one of those chunks, you can create a cookie that grants access to all of them.
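Under the hood, a signed cookie is just three cookie values: the policy (base64-encoded with CloudFront's URL-safe character substitutions), an RSA-SHA1 signature over that policy, and your key pair ID. The SDK handles all of this for you, but here's a rough sketch of the mechanics; the freshly generated RSA key, the example.com resource, and the key pair ID are stand-ins, not real CloudFront credentials:

```ruby
require 'openssl'
require 'base64'
require 'json'

# CloudFront's URL-safe base64 swaps '+' => '-', '=' => '_', '/' => '~'
def cf_base64(data)
  Base64.strict_encode64(data).tr('+=/', '-_~')
end

# A freshly generated key stands in for the real CloudFront private key.
key = OpenSSL::PKey::RSA.new(2048)

policy = {
  "Statement" => [{
    "Resource" => "http*://video*.example.com/*",
    "Condition" => {
      "DateLessThan" => { "AWS:EpochTime" => Time.now.to_i + 600 }
    }
  }]
}.to_json

signed_cookies = {
  "CloudFront-Policy"      => cf_base64(policy),
  "CloudFront-Signature"   => cf_base64(key.sign(OpenSSL::Digest::SHA1.new, policy)),
  "CloudFront-Key-Pair-Id" => "APKAEXAMPLEKEYPAIRID" # hypothetical ID
}
```

When a request comes in, CloudFront decodes the policy, checks the expiry and resource pattern, and verifies the signature against the public half of your key pair. That's why only holders of the private key can mint cookies that pass the check.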

So for this solution, we'll set up signed cookies for users that are authenticated. But to create these Signed Cookies, our server side code has to be authorized to do so. How do we authorize our server code to create cookies? We use Cloudfront private keys.

Cloudfront Key Pairs

If anybody could randomly create a signed cookie, it wouldn't really be protected. In fact, the "signed" part is what makes it protected. Only people authorized to create cookies can create ones that will pass the check the Amazon servers perform. To make your Rails code authorized, it will need access to Cloudfront keys. To do this:

  1. Log in to Amazon AWS
  2. Click on your username in the upper-right and select "My Security Credentials".
  3. Ignore the warning about IAM by clicking "Continue to Security Credentials" as Cloudfront keys only work at the User Account level.
  4. Expand the "Cloudfront key pairs" section.
  5. Click on "Create New Key Pair".
  6. The pair will be created, and you'll be presented with options.
  7. Download the PRIVATE key file.
  8. Then click "close".
  9. You'll be brought back to your list of Keys. You should also see the "ACCESS KEY ID". Keep this window open as you'll need that value.

We then put the private key file into our source, but be warned that this file should not be accessible to the public. So if you're hosting your source code in a public repository, you'll want to find somewhere else to put this file. Because our source code is private, we put the private key in /railsapproot/cloudfront.

Creating a signed cookie in Rails

Ok, so we have our private key and Access Key ID ready to use, so we can properly create signed cookies. Let's put them to use.


First, grab the 'aws-sdk' gem. I used version 3 of the SDK. In your Gemfile:

gem 'aws-sdk', '~> 3'


Second, let's set up a global Cookie signer to use in our app. Create an initializer config/initializers/aws.rb and put this code in it:


You'll fill in key_pair_id with the ACCESS KEY ID from the previous step. For private_key_path, give the path to where you saved the private key file. The Access Key ID might also work better as an environment variable, so you might have something more like:
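As a sketch of both variants, assuming version 3 of the SDK's Aws::CloudFront::CookieSigner class; the key pair ID and file path below are placeholders you'd replace with your own:

```ruby
# config/initializers/aws.rb
# Sketch only: the key pair ID and key path are placeholders.
CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
  key_pair_id: "APKAEXAMPLEKEYPAIRID",
  private_key_path: Rails.root.join("cloudfront", "private_key.pem").to_s
)

# Or, with the Access Key ID pulled from an environment variable:
#
# CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
#   key_pair_id: ENV["CLOUDFRONT_KEY_PAIR_ID"],
#   private_key_path: Rails.root.join("cloudfront", "private_key.pem").to_s
# )
```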


before_action to create the cookie

So, the next question is: when do you actually want to create the cookie? My first approach was to create the cookie right after the user signed in. That seemed smart. The thing is, if they signed in and left their browser open for a long time, the cookie might expire, and then they'd have to sign out and sign back in. You could manage this by signing them out automatically, but I decided that was too complicated for my use case. You can definitely do it that way, but here's what I decided to do instead.

I decided to write a before_action for all actions that checks if the user is signed in. Then, if the user is signed in, I set the cookie. This way, every request they perform while they're signed in just ends up renewing the cookie. The only catch to this is ensuring you clean up the cookie when they sign out.

So here's what my application controller looked like:

class ApplicationController < ActionController::Base
  before_action :set_cloudfront_signed_cookie

  ...

  private

  def set_cloudfront_signed_cookie
    if user_signed_in?
      cookies_values = CF_COOKIE_SIGNER.signed_cookie("", policy: policy)
      cookies_values.each do |k, v|
        cookies[k] = { value: v, expires: 10.minutes.from_now, domain: :all }
      end
    end
  end

  ...
end

So this runs before every action. If the user is signed in, we create a signed cookie using the CF_COOKIE_SIGNER from the AWS SDK. That spits back a hash of values that we have to write to the client's cookies. For each cookie value, we set it to expire after 10 minutes, and we also specify the very important domain: :all. What that option does is set the cookie for the parent domain ("bitesite.ca") rather than the full host ("www.bitesite.ca"). Once you do that, those cookie values will also be sent with requests made to "video-stream.bitesite.ca" and "video-file.bitesite.ca".

Let's take a closer look at the initial call to CF_COOKIE_SIGNER.signed_cookie.

First of all, you'll see that the URL argument I've passed to the method is a dummy value. This is not for demonstration purposes, and it's not a mistake; this is literally the code I use, and I'll tell you why. If you're passing a custom policy to this method, the URL parameter doesn't matter at all. So I purposely put a throwaway value there to let other developers know that it has nothing to do with this all working.

Now, what I just said is that the URL is ignored if you pass in a custom policy, and that's what the second argument, policy, is. Let's take a look at that method below, which also lives in the application controller as a private method:

class ApplicationController < ActionController::Base
  ...

  private

  def policy
    resource = "http*://video*.bitesite.ca/*"
    expiry = 10.minutes.from_now
    {
      "Statement" => [
        {
          "Resource" => resource,
          "Condition" => {
            "DateLessThan" => { "AWS:EpochTime" => expiry.utc.to_i }
          }
        }
      ]
    }.to_json.gsub(/\s+/, '')
  end
end

This is the policy included in the cookie that the Amazon servers check when the browser makes a request. The expiry specifies how long the cookie is valid for. Remember, we call this every time a signed-in user takes an action, so the policy gets renewed every time they browse to a page. What's more important here is the way the resource string is constructed. Amazon allows you to put wildcards in the resource URL, and this is the key to the policy working for multiple files (and multiple servers, for that matter).

Let's break down the three wildcards. First, you have the scheme: http*.

This is optional, but it basically allows both secure and non-secure requests. That is, it will allow the browser to request "http://" and "https://".

Secondly, we have the host: video*.bitesite.ca.

What's nice about this is that it allows the cookie to work for both our streaming Cloudfront instance and our single-file Cloudfront instance. That is, it will work for both "video-stream.bitesite.ca" and "video-file.bitesite.ca".

And lastly, we have the path: /*.

That allows the cookie to apply to basically any file hosted on those servers.
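If it helps to see the matching concretely, here's a small Ruby sketch; wildcard_match? is a hypothetical helper, not part of the AWS SDK, and CloudFront does the real matching server-side:

```ruby
# Hypothetical helper that mimics CloudFront's wildcard matching,
# where '*' stands for any run of characters.
def wildcard_match?(pattern, url)
  regex = Regexp.new('\A' + Regexp.escape(pattern).gsub('\*', '.*') + '\z')
  !!(url =~ regex)
end

pattern = "http*://video*.bitesite.ca/*"

wildcard_match?(pattern, "https://video-stream.bitesite.ca/abc123/hls/movie.m3u8")  # => true
wildcard_match?(pattern, "http://video-file.bitesite.ca/abc123/mp4/movie_720p.mp4") # => true
wildcard_match?(pattern, "https://www.bitesite.ca/some-page")                       # => false
```

So one cookie policy covers every transcoded file on both Cloudfront instances, but not the rest of the domain.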

Alright, that's it. Put that into your code, sign in, and browse to a page. Your cookies should now be set. It's really easy to see these in Chrome: just open your developer tools and go to the Application tab. Open up your cookies, and you should see cookies for your domain.

You'll see that the domain on the Cloudfront cookies is ".bitesite.ca".

The HTML Code

With your cookies in place, ready to be sent with your video-stream and video-file requests, you're ready to cap it all off. Code a page, and put this in:

<video width="100%" controls>
  <source src="https://video-stream.bitesite.ca/<id-of-job-in-s3>/hls/<video-file-name>.m3u8" />
  <source src="https://video-file.bitesite.ca/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4" />
  Your browser does not support HTML5 video.
</video>

Feel free to add thumbnails generated by AWS and controls:

<video width="100%" controls poster="<thumbnail-url>">
  <source src="https://video-stream.bitesite.ca/<id-of-job-in-s3>/hls/<video-file-name>.m3u8" />
  <source src="https://video-file.bitesite.ca/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4" />
  Your browser does not support HTML5 video.
</video>

And that's pretty much it! Everything should be working.

If you want to make sure it's secure, grab the mp4 URL, log out of your app, wait 10 minutes for the cookie to expire, and then paste that URL into your browser. You should get an error.

Finishing it all off, cleaning up your cookies

Now, because of my decision to renew the cookie on every request, it's a good idea to kill the cookie right after a user logs out. So do this wherever you handle logging out; I use Devise, so I override SessionsController#destroy:

class SessionsController < Devise::SessionsController
  def destroy
    clear_cloudfront_cookies
    super
  end

  private

  def clear_cloudfront_cookies
    cookies.delete("CloudFront-Key-Pair-Id", domain: :all)
    cookies.delete("CloudFront-Policy", domain: :all)
    cookies.delete("CloudFront-Signature", domain: :all)
  end
end

It's VERY important that you specify domain: :all, because that's how you set the cookies up. Otherwise, they won't delete properly.


With that, you now have a great video solution! Congrats. This took me three to four solid days of debugging to get through, so hopefully this helps some peeps. The great thing about the AWS Answer is that it gives you nice infrastructure for uploading, transcoding, and serving the files. So in the future you can build an interface for users to upload files; once they're uploaded to the S3 source bucket, they will automatically get transcoded, and then you can inspect the Dynamo DB programmatically to serve them up.

Our project didn't require that level of sophistication, but it's good to know we have it in our back pocket if we need it. With Cloudfront and streaming files, you know you're serving your users fast and with minimal data to view the video.

Always room for improvement, so be sure to let us know if you have anything to add to this. (At the time of writing this blog we don't have comments implemented, but they will be coming soon.)

Thanks for reading.

Casey Li
CEO & Founder, BiteSite

What is an MVP or Minimum Viable Product?

process methodology software

When it comes to building software, whether you’re building it yourself or hiring a custom software services company to help you out, an increasingly important topic to understand is the Minimum Viable Product or MVP for short.

Before we dive into it, first you should know that MVP has a few names. In fact the first time I came across it, I was actually introduced to it as MMP or Minimal Marketable Product. I read about it in an amazing book called Agile Product Management with Scrum.

Whatever it’s called, MVP embodies a very important philosophy when it comes to software development and can actually be applied to other fields.

In this article, we’ll dissect MVP and explain how it’s one of the best things you can do when starting a new product, a new feature, or new anything :)


Let’s start with some background on software development. Back in the day, software used to be developed in what was called a Waterfall model which was most likely adopted from other engineering disciplines. From a high level, it would look like this:

  • Domain Analysis
  • Requirements Gathering
  • Design
  • Implementation
  • Testing
  • Delivery

If you were building a software product, you might spend a month or so analyzing your domain - in other words, understanding the world that your users inhabit and the lives they live. Then you'd move on to requirements gathering, where you would meticulously define every aspect of your application and list out every feature that your users wanted or would want. You would spend a good amount of effort fleshing out these requirements and getting every detail right. Next up would be design: based on the requirements, you would start to lay out the user interface and the software architecture for your application. After your team approved all that, you would move on to coding the application, testing it, and finally delivering it to your users.

From start to finish, you would probably be spending anywhere between 6 months and a few years before you actually presented your users with a usable product.

If you happen to get all the steps right leading up to the release, you’d be a genius. However, the reality is it’s incredibly hard and rare to get every step right without getting proper, genuine feedback. This is the problem with Waterfall.

You may ask your potential users for feedback along the way, you may show them wireframes, you may show them mockups, but until you put an actual usable piece of software in their hands, at no fault of their own, they won’t give you genuine feedback.

How many times has someone looked at a proof of concept and said “Wow, that’s great - let’s move forward!”, only to discover all sorts of issues once the real product is put in their hands?

Therein lies the problem.

It’s hard for anybody to truly evaluate a product based on documentation, meetings, discussions, wireframes, or designs.

With Waterfall, what you’re left with is months, if not years of assumptions and predictions about how someone will feel about a product rather than genuine, reactive feedback.

It’s the genuine, reactive feedback that you want so that you know you are building a product or set of features that users will genuinely want and need. You want to get that as soon as possible and avoid long stints of assumptions and predictions that cost you both time and money.

The big advantage of software

As mentioned, Waterfall was adopted by the software industry most likely because it’s what other disciplines used. However, there is a big advantage that software has over things like civil or mechanical engineering.

Let’s use bridge building as an example. If you’re building a bridge, you rightly do a lot of upfront work, analysis, calculations, and small tests before you put your first user on it. Once you open up a bridge for use, there is very little room for error, and it costs a lot of time and effort to redo anything.

This is not the case for most software.

For most software products, companies are given the chance to easily update and continually improve their product. With modern technologies like the internet, it’s incredibly easy for a software company to improve their software by deploying updates over time.

Basically, software companies are given many chances and opportunities to change their product based on user feedback.

This distinct advantage combined with the importance of genuine, reactive feedback gave rise to iterative development.

Iterative Development

Iterative development is a very simple concept. Rather than taking a product from start to finish and then leaving it alone, you build the product, get feedback from your users, and do it all over again.

However, a company does not have infinite resources, so something has to give. What changes is the amount of effort and time spent in each iteration.

In Waterfall, you might spend 2 years doing domain analysis, requirements gathering, design, and implementation before you push out a product to your users.

With iterative development, you do some version of that but on a way smaller scale. You do some basic research, some basic design, and push out the product in a matter of a few weeks if not less. If you can’t sacrifice the level at which you do those activities, then you reduce the scope of what you’re implementing.

Either way, the idea is you do smaller chunks of development and repeat. There are many philosophies and processes that help you execute small iterations of development like Scrum, Extreme Programming, and Agile Methodologies.

MVP or Minimum Viable Product

So now, you understand the reasoning behind iterative development. It’s a push to get genuine feedback as fast as you can so that you can iterate, and improve your product. So how does a Minimum Viable Product fit in?

An MVP is the product of your first iteration of development.

Remember, your goal is to get user feedback as soon as possible. So an MVP is the smallest, simplest, most barebones version of your product you can come up with that will get you the feedback you need to proceed.

Let’s break that down a bit.

“Smallest, simplest, most barebones...”

The idea here is that you want to reduce the time and effort as much as possible. The reality is that until you get user feedback, everything you assume could be wrong. So spending more and more time in the assumptive phase can really hurt your success. You want to reduce that time so you can test your assumptions and ideas as fast as you can. So the question here is: how small can you make it?

“...that will get you the feedback you need...”.

This counterbalances the push to make things small. If your MVP is too small, too bare, it could be so unusable that you get no feedback at all. If there is zero attraction to even try out the feature or product, then you’re going to get nothing back.

“...to proceed.”

This last part is really your ultimate goal. An MVP is all about getting to the next step. If you execute an MVP correctly, you’ll have enough information to help you make big decisions about moving forward.

Deciding on what makes up your MVP is really an art: balancing reduced time and effort against creating something that people will actually use.

Applying MVP at all levels

MVP was originally used when talking about releasing the first version of your product. Since then, however, its principles have been applied at all levels.

For example, let’s say you’re creating a new feature. Rather than building the best, highest gloss version of that feature - you might scope it down and release the MVP version of that feature to get feedback on where to go next.

It’s become so common at our company that we’ve started to use it as a verb: “Let’s MVP that feature!”

Later in this article, we’ll see how MVP can be applied to more than just software development.

How do you go about defining your MVP?

There are definitely a lot of strategies when it comes to defining an MVP but it doesn’t have to be a complicated process. In fact, at BiteSite, when we talk to our clients about “MVP’ing” their product or a new feature, it’s a pretty simple conversation.

When it comes to most people and how they think about their product or a new feature, nine times out of ten, they are already thinking bigger than an MVP. It’s just the nature of what happens when you think about your next great idea and get excited about what it could be.

So the process is very simple: go through every aspect of your product or feature and ask yourself “Do we really need this right now?”

Play your own devil’s advocate and you’ll be surprised how much you can cut out.

In fact, you may even be noticing these days that brand new products are missing some features you would have thought were no-brainers. That’s probably a company putting out an MVP and seeing how the masses react. Remember how the first iPhone shipped without copy and paste?

When you develop a product, you’ll most likely have an endless backlog of features. MVPs help get you the feedback you need to prioritize what’s important now versus what can be delayed until a later date.

Don't make your MVP too minimal

Something I’ve recently learned I’m guilty of is making the MVP too minimal. This usually stems from the developer side rather than the product management side.

When I started applying the MVP philosophy to our products and features, I made the mistake of always cutting out the same things. For example, one thing I cut out every time was UI design. I would say, let’s just get the basic data entry working with a basic UI first and see how users respond to it. The problem was, if the interface was really bad, users wouldn’t use it at all.

Remember, the goal is to get feedback from users.

Part of the delicate dance of figuring out your MVP is figuring out what it is that users will care about and what they won’t care about. So make educated guesses and don’t strip out too much.

Defining your MVP doesn’t always mean cutting out functionality

A lot of times when we talk to clients about MVP, it’s not always about cutting functionality out for the user - but rather replacing it with a non-software solution.

For example, you may say “I want my users to be able to reset their own passwords.” A developer comes back to you and says, “Well. Right now, I can manually reset the password, but we’d have to build out an interface for the user to reset it themselves.”

Depending on the product, a good MVP solution might be to keep this feature out and, for now, have the developer reset the password manually and a human being e-mail the new password to that user.
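As a rough illustration of how lightweight that manual path can be, here's a sketch of the kind of one-off script a developer might run. This is a hypothetical example: the `User` record and field names are invented stand-ins (in a live Rails app this would be an ActiveRecord model), not part of any real product.

```ruby
require 'securerandom'

# Hypothetical stand-in for a real user record.
User = Struct.new(:email, :password)

# The "manual reset" a developer might run from a console: set a random
# temporary password, then a human e-mails it to the user by hand.
def manual_password_reset(user)
  temp = SecureRandom.alphanumeric(12)
  user.password = temp
  temp
end

user = User.new('jane@example.com', 'old-secret')
temp = manual_password_reset(user)
puts "E-mail #{user.email} their temporary password: #{temp}"
```

A few lines like this, plus a human sending an e-mail, can stand in for an entire self-serve password-reset feature until the feedback says you need the real thing.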

In the beginning when you’re dealing with your first set of users, this might be a great way to start. You may find out after you release it that for the first year - no one has requested to reset their password. On the flipside, you might find out in the first week that everybody wants to reset their password and as a result you prioritize this feature.

That’s what MVP is all about. Start off with a small scope and don’t implement a feature until you have good evidence that it’s needed.

You don’t have to get it right

The idea of an MVP is very consistent with a lot of different processes, philosophies, and methodologies. You’ll read this a lot:

“It’s not about getting it right. It’s about moving forward.”

That’s the crux of it all. If you’re having a lot of trouble figuring out if your MVP is too small, or too big, or if you’re making the right assumptions or not - don’t worry about it too much. MVP is all about picking a path, implementing it, and testing it with real users. You’re bound to get some stuff wrong. But knowing you’re wrong based on feedback is way better than assuming you’re right or wrong.

Yes, it’s good to have informed discussions, it’s good to have opinions, and it’s good to make educated assumptions - but don’t dwell on this too long. Just move forward and get the feedback.

The Sprint Book by Jake Knapp really solidifies this concept and even assigns a “moderator” to the process to keep this in check. In fact, that book and its process in general is an amazing embodiment of MVP.

It’s all about the Feedback

By now you’ve gathered that MVP is all about feedback. The one thing to keep in mind, though, is that feedback can come in many different forms, and each has its place. The following is a short list of the different types of feedback you can aim for when finally releasing your MVP:

  • Direct Contact
  • After you’ve deployed your MVP, you can directly contact your users and ask them what they liked and what they didn’t like. This obviously is only suited to smaller numbers of users, but can be very helpful for companies starting out. One thing to keep in mind is to analyze the feedback rather than just follow it. Just because one person says they didn’t like your sign-in screen doesn’t mean everyone hates it.
  • Surveys
    • You could proactively reach out to some users and send a survey. There is a whole art to designing surveys, but even simple surveys can get you some great information.
  • Data and Usage Analytics
    • Data analytics is a great way to get unfiltered, honest, reactive feedback. When you talk to customers in person, they may hold back their true sentiments. Data and usage analytics let their actions do the talking. Take a look at how many people are actually using your feature or product and how they are using it. Tools like Google Analytics, New Relic, Skylight, and Mixpanel can help you with this.
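To make the usage-analytics point concrete, here's a toy Ruby sketch. It isn't any particular analytics library, and the event shape is invented; it just shows the kind of question you're answering: how many distinct users actually touched each feature?

```ruby
# Toy usage-analytics tally: given raw usage events, count how many
# distinct users actually used each feature.
events = [
  { user: 'alice', feature: 'export' },
  { user: 'bob',   feature: 'export' },
  { user: 'alice', feature: 'export' },
  { user: 'carol', feature: 'search' },
]

usage = events
  .group_by { |e| e[:feature] }
  .transform_values { |es| es.map { |e| e[:user] }.uniq.size }

puts usage # feature => distinct user count: export used by 2 users, search by 1
```

Tools like Google Analytics or Mixpanel do this at scale, but even a tally like this over your own logs can tell you whether a feature earned its place in the next iteration.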

MVP is just a start

The last thing I’ll mention about MVP is to remember that it’s a philosophy on how to start implementing your product or feature.

When I say scale back your product idea or vision - that’s only to implement your MVP and get going. By no means do I mean throw out your big ideas or vision. Keep those in your back pocket and let your MVP inform you as to whether or not you’re on the right track.

How can you apply this

You’ve learned a lot about what an MVP is. So how can you start applying what you’ve learned? Well, it depends on what you’re working on.

Are you thinking of building a new piece of software?

If you are, chances are you’ve already had a big vision in place. Keep that vision in your back pocket and start to think about your MVP. Put all your ideas on the wall, and start crossing off the ones you really don’t need to get going.

Our favourite clients at BiteSite are the ones who have thought about their MVP. They come in with a big inspiring vision, and then shortly after say “...but as a start, here’s what I envision.”

Personally, I love when I can envision a product that I can build in a couple of days that will instantly bring value.

Are you an established company with an established product?

If so, chances are you are considering developing new features. You can apply the MVP principles to your features. Scale them down to a small enough level that you can quickly implement them, deliver them, and get some informative feedback.

MVP for everything

What’s interesting is that as I go further into my professional career, I find the principles behind MVP can be applied well beyond software. The idea of shipping small, quick deliverables and then getting feedback is finding its way into many areas of our work.

Currently, I plan on applying it to our sales process and identifying target markets and in the past I’ve applied it to small changes in our company. We would try small versions of our changes and if they failed, we’d scrap them. If they succeeded, we’d iterate and build on them.

MVP is an incredibly powerful idea that was introduced to me through software, but I’m finding more and more that it can be applied to almost everything.

Casey Li
CEO & Founder, BiteSite

What exactly is custom software?

business software

The world is full of software. Just take a moment to look around you and you’re probably surrounded with many examples. Whether it be the browser that you’re reading this in, the software you use at work, or the software that’s running inside your car, software is everywhere these days.

Most of the time, we are interacting with software built for the masses. We’ve got Instagram, Whatsapp, and Facebook for our social lives, we have Word, Excel, and Acrobat at work, and we have so much software behind so many devices we use every day.

While these applications are incredible and improve our lives in so many ways, there are times where they aren’t quite what we’re looking for. Especially when it comes to work or running a business, sometimes the applications out there fall short in one way or another.

So what do you do?

You really have three options:

  • Put up with existing software and live with its shortcomings
  • Wait for updates or a brand new application to come out that hits the mark
  • Build something yourself

Now when it comes to building it yourself, most of us don’t have the luxury of knowing how to program a piece of software.

That’s where custom software comes in.

Custom software is software that is built specifically for you, your business, your needs, and your wants rather than software that is built for the masses. It’s like the difference between getting a custom, bespoke suit made for you versus buying one off the shelf.

Typically, because a business doesn’t have the technical knowhow to build custom software themselves, they hire another company to build them a piece of software to solve a problem they are having. The company they hire is a custom software shop and the software they build is custom software.

The many forms of custom software

Custom software comes in many forms and is sold by many different types of companies. When it comes to the software itself, custom software shops analyze the problem and decide what’s the best technology to use. They may recommend a web application, a mobile application, a desktop application - or even recommend that custom software is not the way to go at all. While custom software is typically built from scratch, sometimes custom software solutions involve integrating existing applications.

If this is getting a little confusing, let’s look at an example of a good candidate for custom software. Let’s say you run a plumbing company. Today, you get appointments by having people call a phone number. The appointment leads to a service call that you fill out on paper and give to one of your plumbers. They complete the job, fill out the paper service call, come back to the office and file the final report.

You might think to yourself that it would be great if a lot of this was digitized and automated. You go to a custom software company, educate them about your workflow and they build you a custom mobile application for your plumbers who can receive service orders on their phones, fill out the reports on their phone, and have the data automatically sync to a back-end office application that you can view. They may even build you a web application that allows your customers to book online. That would be a great candidate for custom software.

Who sells custom software

When it comes to the companies that offer custom software services - there are a whole bunch and they call themselves all sorts of names. To make things more complicated, sometimes companies that are focused on other offerings may offer custom software solutions as a side service. For example, even though Marketing agencies are focused on marketing activities, they may still offer custom software services since that may play into their strategy. Below is a small list of types of companies that may offer custom software services:

  • Custom Software Shop
  • Custom Software Services Company
  • Software Firm
  • Software Consulting Agency
  • Software Consultant
  • Software Freelancer
  • Digital Agency
  • Marketing Agency
  • Web Design and Development Shop
  • Mobile Development Shop
  • Software Solution Firm

All these types of companies and more offer custom software.

When it comes to the service of custom software, typically companies do a lot. Among other things, they will analyze your problem, make recommendations, strategize with you, implement a robust development process, design the UX and UI, implement the code, and deliver the final product. In most cases, they will also maintain the software. By the end, if you’ve dealt with a good custom software shop, you’ll have a good sense of what it’s like to run your own software company.

Some great companies local to Ottawa that do custom software include Industrial, Netfore and BitHeads (Not to mention BiteSite :)). Outside of Ottawa, there are amazing companies like Thoughtbot and TWG.

So what?

Now that you know what custom software is - what is the big deal? We’ll be writing more and more articles on this subject, but custom software is all about solving a problem. By solving that problem, your company may get the edge on a competitor, your company may run more smoothly and efficiently, or your company just may experience more joy at work. Whatever it may be, it’s all about identifying problems that can be solved with software.

So are you a business owner? Spend 10 minutes thinking about your business and the challenges you face. Ask yourself: could something be done better? Could you picture yourself using software to solve it?

If so, custom software might be the answer.

Casey Li
CEO & Founder, BiteSite

Our first hire is moving on

company business

When I started BiteSite, I was really excited to see what I could do with the company. As a business owner, I found myself constantly looking for milestones that led me closer to running a 'real' business. I remember when I registered the business name, I remember when my friend got me a Freshbooks trial for my birthday, and I remember purchasing the domain. Each step along the way - I felt more legit.

But the ultimate was when I made my first hire.

Building the team was always hugely important to me and still is. Even though it went against business sense, I was adamant about hiring an employee rather than a contractor. I wanted the sense that this person was really part of BiteSite and would help shape the company. That first hire was Ryan O'Connor.

I met Ryan at the University of Ottawa when I was teaching a Ruby on Rails course there. The first thing that struck me about Ryan was the way he helped others. Although it's becoming rarer these days, programmers can sometimes develop an ego - and Ryan was the complete opposite. He clearly understood things better than others but never let that get in the way of helping someone out.

We were lucky to have Ryan say yes to working at BiteSite. He was taking a chance as the first employee of a new startup. But he did join us in 2015 and has been killing it ever since.

Ryan has brought so much to BiteSite. He brought strong development practices, from automated tests to up-to-date frameworks, he helped create a welcoming, mentoring atmosphere for new employees, and he embraced our 20% Free Time Fridays and developed a lot of amazing stuff, including stand-out features like e-Commerce, the Activity Feed, and real-time Task updates. He has become an incredibly strong developer in Ruby on Rails and React - but overall just an outstanding engineer, employee, and person.

The biggest thing an employer can ask for of his/her employees is trust - trust that they'll do the right thing, trust that they'll express themselves, trust that they'll do their best - and with Ryan, we had that in spades.

Today is Ryan's last day at BiteSite and it was amazing to have the chance to work alongside such a great individual. He starts his next adventure at one of our former clients, Splice. I know he'll do amazing things there and wherever he ends up in life.

Thank you, Ryan, for all that you've done in helping me realize my dreams.

Casey Li
CEO & Founder, BiteSite

The Pros and Cons of Custom Software

business software

When you run a business, you may inevitably hit a point where you decide that software can help. Whether it be a website for marketing purposes, an app to help with automating workflow, or a solution to give you an edge over your competition, software can solve many use cases and bring many benefits.

But as you start to navigate the world of software you may be faced with information overload and have trouble deciding which route to go.

When dealing with business problems that can be solved with software, generally your solutions fall into two major categories:

  • Existing Applications
    • Existing applications are products built for the masses that you buy or subscribe to off the shelf. We sometimes refer to the companies behind them as “product companies”.
  • Custom Software
    • Custom software involves hiring a team to build a piece of software specifically for you and your needs. We sometimes refer to the companies behind these pieces of software as “services companies”. BiteSite falls into this category, along with TWG and Thoughtbot.

In this article, we explore some of the biggest pros and cons to choosing custom software as opposed to the alternatives.

PRO #1: Custom software solves your specific problem

The number one reason businesses choose custom software is that none of the alternatives truly solve their problem. Existing applications may come close, but may be missing one or two key features. A lot of times this happens when businesses are in very specific niches or have complex workflows that are not common. They’ll try what’s on the market and end up not satisfied with anything out there. Product companies typically develop software that suits large markets with common issues. If you find yourself in a smaller market that’s not served by existing applications, custom software is a great way to ensure that your chosen solution actually solves your specific problem.

PRO #2: Custom software is optimized for you

When exploring existing applications, they may solve your problem, but they may also solve 100 other problems. There are a few issues with this. First, all these extra features can clutter your experience and get in the way of what you really need. Second, all the extra features may be overwhelming, leading to a frustrating user experience. And third, you may be paying for a lot of functionality that you’ll never use.

With custom software, in general, the features that go into the product are completely dictated by you and your business. This way you can ensure that only what you need goes into the product and nothing more. Furthermore, with a reduced feature set, you get the added benefit of quality over quantity.

PRO #3: Custom software teams are hired to work for you

When using existing applications, they may be incredibly powerful and feature rich, but consider for a second that they may also be serving hundreds of thousands of customers. 95% of the time the app may work great, but there may be 5% that doesn't work so great. So you call up the support team and you make a request to add a feature. The problem here is that your feature request might be competing with hundreds of thousands of other customer requests. While product companies will do their best to address your problem, the reality is they may not get to it for another year or more.

With custom software, you can be sure that your voice will be heard and that the features you want will be prioritized because you'll have a more direct line to the team and the team is generally serving fewer customers. The extra money that you pay for custom software gives you dedicated resources to solve your problems.

CON #1: Custom software is expensive

One of the biggest and most obvious reasons to NOT choose custom software is cost. Existing solutions can cost as little as $10/month, whereas custom software can run you literally tens of thousands of dollars a month. The best custom software is rarely a small 1-to-2-week project. Building custom software requires a solid understanding of the business and its problems, robust software practices, and dedicated resources. This all adds up pretty quickly.

CON #2: Custom software takes longer to get up and running

Custom software is typically developed from scratch or at least from a basic framework. When it comes to features, not a lot comes for free. Thus, building everything to your expectations can take a long time. Even the most basic features we take for granted may take some time. For example, you may request the ability to allow users to log in. Simple enough, right? Well, what if people forget their password? Ok, let's add a 'Forgot your password?' feature. Well, what if people want to delete their account? Ok, let's add the ability to delete your account, and so on and so on. Each one of these features may take significant effort to design and implement. With existing applications, you can be up and running in literally minutes.

CON #3: Custom software can suffer from lower quality

When talking about the pros - I mentioned that a reduced feature set can lead to higher quality. This is a result of the team spending more effort on fewer features. However, there is another factor that works in the opposite direction.

Generally, product companies have way bigger teams. With services companies, you may have a team as small as 1 to 10 people working on your project. With product companies, you may have thousands of people working on a single product. Granted, product companies are typically working on products that are more complex, but having a larger, more diverse team can lead to higher quality software. This is not a certainty, but the best product companies in the world are typically doing some of the best work in the industry. That being said, services companies like TWG and Thoughtbot are really pushing the boundaries of software as well. And in some cases, product companies will acquire services companies because they are so strong in their field.

So, what should you do?

There are many more pros and cons aside from the ones I’ve discussed here, but the important thing is to choose the right tool for the right job. There are situations where existing applications are definitely the way to go, and there are other situations where custom software is the answer. Even though BiteSite is a custom software services company, many times we will refer potential clients to existing applications because it makes more sense for them.

Deciding can be tough, but there is one saving grace: existing applications are generally cheap to experiment with. You could pay $10/month to try out a cloud product or even sign up for a free trial. If you’re not happy with it, you can start to explore a custom software solution. No huge loss.

The important thing is to do a little research up front before diving deep. And in my experience, people from both sides will give good advice even if that means directing you away from their business.

Casey Li
CEO & Founder, BiteSite