How much does custom software cost?

business software

If you’ve been reading our blog lately, chances are you’re interested in custom software. Recently we’ve written articles about what custom software is, the pros and cons of custom software, and how to get started with an MVP.

This may have piqued your interest a bit, and it may have you considering your own project and which direction to go. In the early planning phases, you might start to weigh your options between the status quo, using a pre-existing application, or building some custom software.

There is a lot to consider when making your decision, but inevitably, one of the biggest criteria that will factor into your decision is cost.

Now, we briefly talked about cost in our Pros and Cons of Custom Software article and how custom software can be expensive. The question is, though, how expensive?

Invariably, when writing articles about cost you have to be careful. There are so many variables that factor into a given company and how much they charge and there are no blanket statements that apply to all companies. It’s probably a reason why most companies don’t talk about the subject publicly.

The reality is, for the same software output, you may pay someone $200.00 and you may pay someone else $20,000.00. The discrepancy can be that big. But you have to consider that for the same software output, you might not exactly be getting the same service. Sure you may end up with the same basic application, but what’s different? Does the person you’re paying have years of experience behind them? Do they have a team that will help you in case things go horribly wrong? Do they have processes that make the development cycles more efficient and less stressful?

These questions just scratch the surface of what can differ from vendor to vendor. But besides your project, you have to consider what the vendor has to account for. If the vendor is an individual working out of their home, they will obviously have way less to pay for than a team of 20 working out of an office. Any business owner will tell you how high costs can get.

Another factor is simply the types of clients that the vendor deals with. Some vendors deal with huge organizations that spend hundreds of thousands if not millions of dollars on custom software projects. These vendors will naturally price themselves for those customers. In some cases, it’s due to the fact that these types of customers have way higher demands, but in other cases, it’s simply what the client is used to paying.

So when thinking about the cost of custom software, you have to do a bit of thinking about anything you hear or read and understand that there is a LOT that goes into the number that someone gives you.

I can’t speak for other vendors and how they do things, but I can write about how we run things at BiteSite. I would assume that most vendors follow something similar, but again, I’m only speaking from our experience.

The problem with Fixed-Price Contracts

When BiteSite started, our contracts and projects fit into two categories: fixed-price and hourly billing. For the fixed-price contracts, here’s what our pricing process would look like. We would sit down with the client, discuss the project, sometimes break down the project into milestones, and agree upon a feature set. We would then take that feature set, estimate how many hours it would take, multiply those hours by our standard rate, and draw up a contract based on that number.

Because we had seen other companies do this, we thought this was the way to go. But after a few projects that went way over budget, and after reading Thoughtbot’s Open Source Playbook, we had the confidence to say “This is not working out.” (In fact, Thoughtbot taught me the phrase “Fixed-Price Bid”.)

The problem with software development is that it’s usually best developed in an agile manner. That is, with shifting priorities, MVP philosophies, and incremental changes based on continuous feedback. When you put all those principles in place, software development is generally very hard to predict and map out exactly over a long period of time.

So if you come up with a contract for a 2-3 month project, there is an incredibly high probability that things are going to come up that you didn’t envision. In fact, if you’re a good agile developer, you welcome the unforeseen feedback that causes you to shift course.

With that in mind, we abandoned fixed-price contracts for all of our custom software projects and moved purely to an hourly-based billing system.

Making our clients feel comfortable

So going to an hourly-based billing system was great for us. Not only did we stop going over budget, but we no longer stressed about beating the clock. We did our work at a comfortable pace, which in a lot of ways allowed us to do better work.

So that’s all well and good for us, but what about the client? When a client approaches you about creating something for them, it’s hard for them to hear “We’re just going to build it, and charge you at the end of the month.”

With fixed-price contracts, they knew exactly how much they were going to spend. With hourly based billing, they were left in the dark and just had to trust us.

So to combat this, we married our hourly-based billing with a couple of other concepts:

  • Rough Estimate: While we don’t do fixed price contracts, and we don’t stick to a specific number, based on what the customer is asking, we do give a rough estimate. For example, we’ll tell them that what they are initially asking for will take roughly 30 hours and multiply that by our hourly rate.
  • Re-emphasize that the rough estimate is purely an estimate: Despite giving them an estimate, we still emphasize that software is unpredictable and that a lot of unforeseen development may come up.
  • Transparency along the way: If money is a big concern, we let our clients know along the way how many hours we’re spending. We informally agree that if we’re coming close to the rough estimate, we can sit down and chat to make some decisions.

With these three things in place, clients feel much more comfortable agreeing to work with us.

The last thing, though, is that there has to be trust between us and the client. In our first meetings, based on our interactions, our previous work, and our reputation, a trust grows that we will do our best work and not overcharge. Once that trust is established, both parties stop watching the clock and the dollars and start focusing on the project.

This trust between client and vendor is something that should exist with any company you choose. Thoughtbot has a great article on this here.

So, how do we price?

With all that in mind, our pricing becomes pretty easy. For starter projects, we figure out a rough scope of features, we do a rough estimate of hours, and give the rough number to the client understanding that it can fluctuate.

After the project has some legs, usually it moves to on-going work. Our clients ask us for work to be done, we do the work, and bill at the end of the month for our time. If the client is concerned about cost at that point, we do an informal estimate of hours as well.

Stop skirting - what are the actual numbers?

Like I said, vendors vary heavily in cost, and even in our lifetime, BiteSite has changed its rates several times and will probably change them again in the future. With that said, as of the time of this writing, our standard rate for software development is $150.00 CAD + HST per hour. That is a standard hourly rate that covers all of our services. We’re a small team, so our staff are jacks-of-all-trades who cover everything from Product Management, to Design, to Coding, to Deployment - basically everything you need to get a software project off the ground.

When we start a project with a client, we usually like to scale down their big vision to a Minimum Viable Product or MVP. Because software can be very unpredictable to develop, it’s good to come up with a small product that can generate valuable feedback that we can build upon.

Our best MVPs have been in the neighbourhood of 30-50 hours of work. So a good starter project would cost anywhere between $4500.00 CAD and $7500.00 CAD + HST.

So as a rough start, we usually tell our clients that $5000.00 + HST is a good starting budget.

So what can a $5000.00 MVP get me?

So what does 30-50 hours of work look like? At BiteSite, we focus on Web Applications so let’s talk about those. Client features have a big range, but here's an example project to give you an idea:

  • A web application
    • that allows my staff to login and manage their account and profile
    • that automatically calculates total vacation days an employee is entitled to based on their start date
    • that allows my staff to request their vacation days
    • that allows supervisors to approve vacation days
    • that allows supervisors to customize how many vacation days each employee is entitled to
    • that e-mails supervisors anytime someone has logged a vacation day
    • that e-mails staff anytime their vacation is approved
    • that summarizes total vacation days in a report

This is a very high-level description, but it’s a good example of a good MVP. If a client came to me with that description for a project, I would say that’s a great starter project that would probably cost around $4000.00 to $5000.00 + HST.

On the subject of the MVP, we try as much as possible to get our clients’ projects down to something in the range of 30-50 hours because we feel it’s a good spot when it comes to foreseeable development. Anything past that, we feel it’s better to develop something small now and see what happens later rather than plan out every single detail.

What happens after the MVP?

After an MVP is launched, it’s really anyone’s guess how much more you will spend. Depending on many factors including how much you're dedicated to the project, how much the software gets used, and how "on the mark" the original features were, your software could demand a lot more future work or very little.

If the software is very successful and you want to keep adding more and more to it, it’ll cost you more. Chances are though, it will also help you more and potentially generate revenue for you. On the other end of the spectrum, you may find that the MVP is perfectly fine and just needs a few tweaks every now and then.

We’ve had projects that have become very successful and demand full-time work, where hundreds of thousands of dollars of development are spent every year, and we’ve had small projects that cost under $100.00/year of maintenance. After your MVP, though, you start to get an idea of how much effort it takes to add to your product - and if you don’t, that should always be an open conversation with your vendor.

So when considering custom software, it’s a good idea to think about your initial MVP cost, and then the potential to fund it afterwards. While our clients may have way bigger budgets, we still encourage them to start with the MVP and go from there.

The fine print

This article is called ‘How much does custom software cost?’, and while the majority of your cost will be spent on labour, I would be remiss to leave out the extra costs that a client is typically responsible for. When it comes to developing software, there are usually a lot of services involved that you’ll pay for. For example, if you want to develop an iOS app for the iPhone, you’ll have to pay $99/year to have it on the App Store. If you want to develop a web application, you’ll have to pay for the domain and hosting costs. So when considering your budget, don’t forget to discuss with your vendor any extra costs on top of the service labour they’re providing.


Like I said, I can only speak to our own company which is basically a $150.00/hr rate. Other companies will have lower rates and others will have higher. Some will do fixed price - but the fixed price probably factors in some hourly estimate of the project.

So if the price can vary so much - what’s the point of this article? First of all, I wanted to educate the market on where the price is coming from. Second of all, we at BiteSite want to be transparent about our pricing. It helps with our own projects and helps push others to be transparent.

Not to mention, I hate when I look around the web and can’t get a single answer to a question I have. If your question is “How much does custom software cost?”, well now you have a starting point.

Casey Li
CEO & Founder, BiteSite

Thank you, Tim Clark

film company business

When I started BiteSite, I did what I always do when I start any project: I started to have grand visions of what the future would look like. I pictured a huge office with awesome furniture and gear, lots of employees, filming on big locations, and coding some of the best apps in the world. Now, while some aspects of that have come true, we’re still a small startup with lots to achieve.

While we have hit some milestones we set out to hit, we still haven’t achieved others. For example, I pictured that by now we’d be in our own office or shooting on RED cameras. But as time goes by, I realize more and more what’s truly important, and I’ve learnt that the most important milestone we’ve hit is building the team and the most important resource we have is our people. I know it’s a cliché, but it’s completely true.

I’ve come to the realization that if we had to cut back on our rent or our film gear or the computers we use or the services we pay for - we’d figure it out. But if I lost my team and it was back to just me - that would be a lot harder to swallow. That’s why I feel it very important to talk about the amazing people that make up BiteSite.

Usually articles, blog posts, and news items centre around people starting at BiteSite or moving on to their next opportunity, which leaves little celebration for the people who do an amazing job day in and day out and stay with the team. So with that, today I’d like to celebrate Tim Clark.

The other day I opened up LinkedIn on my phone and this came up:

3 years at BiteSite makes Tim one of our longest-running employees, and he is definitely the most senior employee still with us today.

Tim started out when I needed a motion graphic designer. I realized that my motion graphic skills were quite limited and I needed someone who could pull off some of the requests we were getting. So I put out a job posting for a motion graphic designer who could also be an all-around filmmaker. I had a couple of great applicants, but at the last minute Tim submitted a video application. I was already impressed by the effort he put in, and then was even happier to see he had the skills we needed. Today, Tim is an all-around filmmaker handling everything from location scouts, to shooting, to lighting, to sound mixing, to editing, to colour grading, to motion graphics, and more.

I hired Tim based on our initial interviews because of his attitude and skill set. But it’s what I have observed over the past 3 years that really makes Tim stand out above the rest.

In corporate video, or any video work for that matter, your work can be incredibly unpredictable. You may show up to a shoot that you planned for 3 hours and end up staying for 6. You may show up to a shoot at 9am and not leave until midnight. Sometimes clients change their minds on the day, sometimes logistics change, and sometimes you just want to get that shot that’ll blow everyone away. Not to mention the physicality involved in filming. Whatever the reason, it can be a very tough job. While we pride ourselves on process and mitigating these issues, you just can’t control everything.

When I was a one-man team, I could always take care of myself, but when working with others - you never know how it’s going to go. But Tim does his job every day without a single complaint. He understands what it takes to make great productions, he understands the unpredictability of it all, and he understands what it means to go above and beyond for the customer to give them something they’ll truly be happy with. He comes in with a positive attitude, never complains, and always gives it his best.

This may sound like a lot of employees or co-workers you know, but I tell you - when you’re factoring in the physicality of some of these long shoot days - it’s not always easy to keep that attitude up. But Tim does it every day he comes to work.

There are a lot of other amazing things I can say about Tim. His technical skills, when it comes to shooting, editing, grading, animating, and more, are incredible and his willingness to constantly improve always impresses me. But it is his attitude in the face of a tough job that has really made him stand out and what I appreciate most. He is a team player in every sense of the word and delivers truly amazing work.

Since hiring Tim, our video productions have been on a steady incline in scale and quality. When I look at the productions we do today compared to when we started - I’m incredibly proud of what we’ve accomplished. None of that would be possible without the skills, effort, and above all else, attitude that Tim Clark brings to BiteSite. Thank you, Tim, for being a part of our team.

Casey Li
CEO & Founder, BiteSite

Serving Videos to Authenticated Users using Amazon AWS and Ruby on Rails

amazon aws video coding software ruby on rails

Update: The original posting of this article left out two key points that have to do with serving your videos over SSL and allowing your second Cloudfront instance to access your S3 MP4 bucket. Scroll down to the section on 'Secondary Cloudfront instance' for the details.

So we recently got a project to do something we have never done before: create a web application that would only serve videos to authenticated users.

It's a use case we run into a lot: if you're logged in as someone who is authorized to view the videos, you can click on a page and watch a video. If you're not, then you can't view that page, NOR can you copy the link to that video to watch it. Now, as common as this use case is, it isn't super common to develop. At least not for us.

Yes, you can use streaming services like Vimeo etc., but what if your client wants their own custom solution? This is the challenge we were faced with.

And it turns out, there were really two things we had to solve:

  • Serving streaming video on-demand
  • Securing the streaming video to only authenticated users

(Just a small note: usually video streaming breaks down into two categories, live or on-demand. Live streaming is when you are shooting video and streaming it to users' devices at the same time. On-demand is like traditional YouTube, Vimeo, or Netflix, where your users are watching pre-recorded video. For our project, we were implementing on-demand video.)

Ruby on Rails is our development web framework of choice, and AWS S3 is our asset storage of choice. So we knew the solution involved some combination of those two.

Now, the thing is - a good chunk of this is documented around the web, but I didn't come across a post that showed the end-to-end solution for Rails. So here you go.

Amazon Web Services

If you don't already know, Amazon has an entire part of its business called Amazon Web Services, separate from its consumer-facing online shopping platform. Amazon Web Services is a collection of services that Amazon provides to developers to help them develop applications. They have everything from virtual servers, to databases, to media encoding, to storage systems. Each one of these has its own name. For example, the storage solution is called "S3". You can do some more reading on Amazon Web Services on their official home page.

AWS Answers

When I started this project, I knew nothing about streaming video to devices. I originally thought the easiest thing to do would be to upload a video to Amazon S3, and just have a link to that video in the HTML code. As I started to research hosting video on AWS though, it turns out that is not a good solution. With that solution, you force every user to download the entire video, scrubbing back and forth is not ideal, and it's not true streaming in the sense that it's not downloading small packets of the video.

The preferred solution involves encoding your video into streamable chunks and serving those to the customer and only if their browser doesn't support streaming do you serve them the entire file. This solution also helps with dynamically changing the quality depending on the user's connection speed. So the first thing to consider is encoding your video into multiple formats that browsers support to optimize the video viewing experience.

On top of that, Amazon recommends that you consider using their CDN service, Cloudfront, to serve your assets to your users. What a CDN does is effectively copy your resources to multiple servers so that when a browser accesses your resource, it grabs it from a server that is geographically closer to it. This ensures fast responses and load balancing between all your different users. So the second thing to consider is setting up Cloudfront to serve your videos.

To do all this, there are actually a lot of moving parts and a lot of complexity involved. The great thing is, Amazon actually does supply all the services needed to execute this, but the question is how do you set it all up?

Well that's where AWS Answers comes in.

AWS Answers is a collection of solutions to common problems. So let's say you wanted to build an "Internet of Things" solution. AWS Answers has a solution for that to set up everything you need to get up and running. Let's say you wanted to create a backend server for a mobile app - well, there is an AWS Answer for that as well.

AWS Answers comes with documentation such as guides and FAQs about how to set up everything you need. For example, it may tell you, "You should set up an AWS DynamoDB and an AWS S3 bucket...". But the coolest thing about AWS Answers is that the solutions also come with automatic deployment scripts. This means that you can click a button, fill out a couple of fields, and then boom - AWS automatically sets up everything you need. It's pretty amazing.

And guess what? There is an AWS Answer for On-demand Video Streaming.

Video On-Demand on AWS

So this article won't explain all the details of the "Video On-Demand on AWS" AWS Answer, but I will break down the basics of all the moving parts. When you deploy this AWS Answer, here are some of the major parts that get setup for you:

  • S3 Buckets (both for the original video files, and the transcoded files)
  • Dynamo DB (a database to keep track of your video files)
  • MediaConvert (to transcode your actual videos)
  • CloudFront (to serve your files to your users)

The AWS Answer actually sets up Lambda functions and Step functions as well, but I want to concentrate on the major parts in this article. You can read about everything else on the AWS Answer Page.

The basic workflow is this.

  1. You upload a video file into one of the S3 buckets that the AWS Answer set up for you (the source bucket).
  2. The bucket is set up to automatically run a transcode job on any video files in that bucket.
  3. The transcode job starts to transcode your video file into appropriate streaming formats.
  4. The transcode job drops its completed files (transcoded video files and thumbnails) into another S3 bucket (the destination bucket) that the AWS Answer set up for you.
  5. The newly transcoded video files are now available to the Cloudfront instance that the AWS Answer set up for you.
  6. You put the Cloudfront URL to your video into your code.

That is the basic setup. So once the AWS Answer is set up, you literally just drop files into the source S3 bucket, AWS does the rest, and provides you with a URL for your video that you can put into your code.

All is good? We're done right?

Not quite.

Tweaks to "Video On-Demand on AWS"

So the Amazon AWS Answer is great, but it's not exactly perfect for everybody, and it definitely was not perfect for us. As we went down the road of putting the Cloudfront links in our code, we ran into a lot of issues, and it turns out the solution was to tweak some of the services and configuration that the AWS Answer set up for us.

Here are the two major changes we made.

H.264 Encoding

So by default, the AWS Answer sets up a couple of encodings for the videos. When you drop a video into the source S3 bucket, AWS transcodes your file into multiple formats. The first set of these formats are all streaming formats, and then it also transcodes your video into a "single-file" format for browsers that don't support the streaming formats.

For streaming, the AWS Answer sets up encodings for HLS and DASH at various resolutions. For the "single-file" format, the AWS Answer sets up encodings for H.265 HEVC at various resolutions. If you're curious, you can actually go into your "MediaConvert" page and click on "Output Presets" to see this list:

Keep in mind that if browsers support the streaming formats, they don't care about the single-file format. It's only the browsers that don't support the streaming formats that care about the single-file format.

The streaming formats are actually great and work with browsers like Safari. The problem is with the "single-file" format. Most browsers that don't support streaming formats, like Chrome, don't support HEVC H.265 either. So our backup single-file format wouldn't work.

So the first change we made to the default solution was to change the MP4 output presets.

We changed the Video codec to "MPEG-4 AVC (H.264)", left everything else as default, and filled in the bitrate to be the same as before. "8500000" for 1080p and "6500000" for 720p. We also updated the name of the preset output so that it said "AVC" instead of "HEVC".

Now files dropped into the source S3 bucket get converted to the HLS and DASH streaming formats as well as an H.264 single-file format.

Secondary Cloudfront Instance

Out of the box, the AWS Answer sets up one Cloudfront instance to point to the S3 destination bucket. To be more precise, it's set up to point to the S3 bucket used for the streaming video files output from the transcoding job. Your single-file H.264 files actually get put into an entirely different S3 bucket.

Since we wanted to serve both, we had to set up one more Cloudfront instance that pointed to the S3 MP4 bucket.

So our setup in the end was:

  1. Cloudfront Instance 1 pointed to S3 Bucket for Streaming Files (HLS, DASH). The name of this bucket has "abrdestination" in its name, ABR standing for "adaptive bitrate".
  2. Cloudfront Instance 2 pointed to S3 Bucket for Single-file Files (H.264). The name of this bucket has "mp4destination" in its name.

Update: Setting up Access from your Secondary Cloudfront Instance

When the AWS Answer set up your 'mp4destination' bucket, it by default blocked access. So you need to allow access from your Cloudfront Instance 2 to your 'mp4destination' bucket. By default, the AWS Answer already sets this up between Cloudfront Instance 1 and the 'abrdestination' bucket. To set up this access, we actually need what's called an 'Origin Access Identity'. Luckily, we can just re-use the one that was already set up between Cloudfront Instance 1 and the 'abrdestination' bucket. If you log into AWS and go to your Cloudfront console, you'll see 'Origin Access Identity' on the left. If you click on it, you'll see the 'VOD on AWS' identity that was set up to allow access between Cloudfront Instance 1 and the 'abrdestination' bucket. Again, we are going to re-use this for Cloudfront Instance 2.

To do this, click on your Cloudfront Instance 2 and then click on the 'Origin' tab. Check the mp4 origin, and then click the 'Edit' button. Under 'Origin Access Identity', choose 'Use Existing' and select the 'VOD on AWS' user. You'll also want to select 'Yes, Update Bucket Policy' for 'Grant read permissions on Bucket'. Click 'Yes, Edit'. Saving changes like this usually takes a while to deploy so monitor the main Cloudfront console to see when the changes have taken effect.

You can also check out the Bucket Policy of the mp4 bucket to make sure that Amazon correctly added permissions for the 'VOD on AWS' account.

Alright, now your Cloudfront Instance 2 has proper permissions to the mp4 bucket.

Update: Setting up SSL for your Cloudfront instance

If you plan to serve your videos over SSL, you can provision an SSL certificate for your Cloudfront instance. To do this, log into AWS and go to the 'Certificate Manager' console. From there, you can request an SSL certificate. For our example, we would be requesting two SSL certificates, one for each Cloudfront instance's domain.


The one catch is that you'll need either admin e-mail access for your domain or access to its DNS records. After you verify ownership of your domain, the certificate will be issued. At that point, you can go back to Cloudfront, click on your Cloudfront instance, click 'Edit', and select the newly issued SSL certificate.

Putting your videos into Code

With all that set up properly, you are now ready to put your videos into your code. Specifically, you'll be putting them into some HTML5 video tags.

There are a couple of ways to get your URLs. You can log into AWS and go to your Dynamo DB. From there, you can browse your items and you'll see your HLS URL.

But in general, you can also browse your S3 destination buckets, and you'll end up with URLs similar to this:


Note that the .m3u8 and .mp4 files are served on different Cloudfront instances, so the subdomain will be different. Also notice that for the .mp4 file, you'll have to choose either the 1080p or 720p file to serve up.

Once you have those URLs, you can put them in your HTML:

<video width="100%" controls>
  <source src="https://<cloudfront-id-for-instance-1>/<id-of-job-in-s3>/hls/<video-file-name>.m3u8" />
  <source src="https://<cloudfront-id-for-instance-2>/<id-of-job-in-s3>/mp4/<video-file-name>_720p.mp4" />
  Your browser does not support HTML5 video.
</video>
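Since our stack is Ruby on Rails, that markup would typically be rendered from a view template rather than hand-written. Here's a minimal, stdlib-only ERB sketch of the idea - the URLs and variable names are illustrative placeholders, not values from a real deployment:

```ruby
require 'erb'

# Hypothetical URLs -- in a real Rails app these would come from your
# model or configuration, not hard-coded strings.
hls_url = "https://video-stream.bitesite.ca/abc123/hls/my_video.m3u8"
mp4_url = "https://video-file.bitesite.ca/abc123/mp4/my_video_720p.mp4"

# The same <video> tag as above, as an ERB template.
template = ERB.new(<<~HTML)
  <video width="100%" controls>
    <source src="<%= hls_url %>" />
    <source src="<%= mp4_url %>" />
    Your browser does not support HTML5 video.
  </video>
HTML

puts template.result(binding)
```

In a real app, the HLS source listed first lets streaming-capable browsers pick it up, while others fall back to the MP4.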

And with that, you have solved the first part of the problem: serving on-demand streaming video to your users.

Now, the question is, how do you restrict it to only authenticated users?

Blocking Public Access

The first part is quite simple. You want to start off by blocking public access to your URLs. The AWS Answer by default makes your S3 buckets private, so you should be OK on that front. Users will not be able to directly paste an S3 URL into their browser and watch a video.

However, the Cloudfront instances are setup with access to the S3 buckets, and they in turn serve up the files publicly. So while users can't access your S3 bucket files publicly, they can certainly access the resources through Cloudfront.

So our first step is to update Cloudfront's behaviour.

  1. Log in to AWS, go to Cloudfront, and take a look at your instances.
  2. Click on your first instance.
  3. Click on the Behaviors tab.
  4. You should see a row for the Default(*) path pattern. Check it and then click "Edit" above.
  5. Set "Restrict Viewer Access" to "Yes"
  6. Click "Yes, Edit"
  7. Repeat for your second Cloudfront instance.

Now, one thing to note. These changes don't take effect immediately. They take some time. Back on your Cloudfront main page where you see the listings of instances, you'll see a status column. If you've just made these changes, the status will probably be "In progress". You'll have to wait until this says "Deployed" before any of this works.

Once it's deployed, try pasting one of your video URLs into your browser. You should see something like this:

That's a good thing! Now users can't just copy and paste your URL and share it with other users.

You've successfully blocked public access. So how do you give access to authenticated users now?

Domain setup

So for the next part, you will unfortunately have to have access to your domain registrar or DNS servers. If you don't know what that is, you'll basically need access to point your domains to certain servers. It's important that the website you are serving your videos on has the same domain as the Cloudfront servers for this to all work.

This is how I have it set up:

Domain                       Points to
www.bitesite.ca              Main website
video-stream.bitesite.ca     Cloudfront Instance 1
video-file.bitesite.ca       Cloudfront Instance 2

For Cloudfront, you'll have to set up CNAME records, and you'll have to log in to AWS and configure your Cloudfront instances' alternate domain names. You can do this by clicking on the instance and, on the General tab, clicking "Edit". Again, you'll have to wait until the status is "Deployed" before all this starts working.

The domain setup here is absolutely crucial because we'll be using cookies, and cookies depend heavily on the domains you are visiting.

Obviously you'll adapt this for your own domain.

Local Testing

In the next step, we'll start talking about cookies. For those who don't know, cookies are basically a collection of key-value pairs that generally get sent with every request to the same host/domain. The interesting thing about cookies is that they are typically set by server code. A typical flow looks like this:

  1. A browser makes a request for a web page.
  2. The server receives the request, and sets a cookie.
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value.
  5. The browser sends the cookie data on every subsequent request to the server.
  6. The server uses the cookie data.

Now, the interesting thing with cookies is that they are limited by domain. What's even more interesting is that cookies can be set up to apply to any subdomain of the master domain. So if set up properly, the browser will not only send a cookie with every request back to the same server, it will also send that cookie to any other server on the same domain. It would look something like this:

  1. A browser requests a page from www.bitesite.ca.
  2. The server receives the request, and sets a cookie for the master domain (.bitesite.ca).
  3. The response is sent back to the browser along with the "set cookie" instruction.
  4. The browser sets the cookie value for .bitesite.ca.
  5. The browser sends the cookie data on every subsequent request to any server whose domain ends with .bitesite.ca (which includes www.bitesite.ca, video-stream.bitesite.ca, video-file.bitesite.ca, etc.)
  6. Any of those servers can use the cookie data.
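In raw HTTP terms, the "set cookie" instruction in step 2 is a Set-Cookie response header. The cookie name and value below are made up for illustration; the important part is the Domain attribute, which scopes the cookie to every subdomain of bitesite.ca:

```
Set-Cookie: session_id=abc123; Domain=.bitesite.ca; Path=/
```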

You can see where this is going.

This is all well and good if you're hosting your code on a server with the proper domains set up, but what about when you're still developing and you want to test on localhost? Well, localhost is its own domain. So if you're testing with something like localhost:3000, how are you going to get your localhost to set a cookie for .bitesite.ca?

Ideally, you would want some bitesite.ca subdomain to point to your localhost.

Well, it turns out there are a lot of different ways to do this, but the quickest is to edit your "hosts" file.

Warning: Editing your hosts file will alter the way your system works when it resolves a URL. So be very careful when editing this and when you're done testing, maybe revert it.

Because I typically don't want to mess with real websites in my browser, and because it doesn't really matter what subdomain I use, rather than pointing www.bitesite.ca to my localhost, I chose to point a dedicated subdomain of bitesite.ca to my localhost.

On a mac, you'll open up /etc/hosts and add this line to it:
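Assuming a stand-in subdomain like local.bitesite.ca (the exact subdomain doesn't matter, as noted above), the entry looks like this:

```
127.0.0.1    local.bitesite.ca
```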

With that line in place, when I accessed that subdomain in my browser, the request would hit my localhost, and any cookies I set had access to the bitesite.ca domain.

Since port 80 is usually privileged, you can run Rails on port 80 by doing something like this:

sudo rails s -p 80

If you're using RVM like me, you'll have to do something like this:

rvmsudo rails s -p 80

(One catch: the server is now running under the 'root' user, so make sure your database accepts connections from 'root'. I had to add 'root' as another user in my PostgreSQL database.)

You should now be able to fire up your Rails server and access it at your chosen bitesite.ca subdomain.

Alright, we're all set to move on.

AWS Signed Cookies

When you restrict access to S3 or Cloudfront files, Amazon provides two mechanisms for granting temporary access to those files:

  • Signed URLs
  • Signed Cookies

Signed URLs can be applied to both S3 URLs and Cloudfront URLs, but for our example, we're only dealing with Cloudfront URLs. A Signed URL is basically a URL you provide to the user that gives them temporary access to a resource. The way it works is that you write server-side code to generate a URL containing query string parameters that specify how long the URL is valid. When that URL hits the Amazon AWS servers, they check its parameters to see if the URL is valid: whether it has expired, and whether its signature proves it was created by an authorized party. In practice, your server-side code has access to AWS private keys to create these special signed URLs, and Amazon even provides Ruby libraries to do this.

Signed Cookies are very similar (and, as far as I know, only apply to Cloudfront URLs). The idea of a signed cookie is that you create a cookie containing a policy. That policy specifies which files the cookie applies to. When the browser requests a URL from the Amazon servers, they look at the cookie that comes along with the request (remember, cookies are sent automatically with every request to the same master domain) and examine the policy. If the policy allows the URL the browser is requesting, Amazon sends back the resource successfully. For security reasons, the Amazon servers also check that the cookie was created by an authorized party. This usually works by having your server code create the cookie using Cloudfront private keys. Again, Amazon provides Ruby libraries to do this.

The big advantage of Signed Cookies is that you can specify a policy that encompasses more than one file. So it's an easy way to give access to an entire set of files. This is particularly important when it comes to streaming, because when you stream a video, you're actually requesting multiple files (10-second chunks, for example). Rather than creating a Signed URL for every one of those chunks, you can create one cookie that grants access to all of them.

So for this solution, we'll set up signed cookies for users that are authenticated. But to create these Signed Cookies, our server side code has to be authorized to do so. How do we authorize our server code to create cookies? We use Cloudfront private keys.

Cloudfront Key Pairs

If anybody could create a signed cookie, it wouldn't really be protected. In fact, the "signed" part is what makes it protected: only authorized parties can create cookies that will pass the Amazon servers' check. To make your Rails code authorized, it will need access to Cloudfront keys. To get them:

  1. Log in to Amazon AWS
  2. Click on your username in the upper-right and select "My Security Credentials".
  3. Ignore the warning about IAM by clicking "Continue to Security Credentials" as Cloudfront keys only work at the User Account level.
  4. Expand the "Cloudfront key pairs" section.
  5. Click on "Create New Key Pair".
  6. The pair will be created and you'll be presented with options.
  7. Download the PRIVATE key file.
  8. Then click "close".
  9. You'll be brought back to your list of Keys. You should also see the "ACCESS KEY ID". Keep this window open as you'll need that value.

We then put the private key file into our source tree. Be warned: this file should not be accessible to the public, so if you're hosting your source code in a public repository, you'll want to find somewhere else to put it. Because our source code is private, we put the private key in /railsapproot/cloudfront.

Creating a signed cookie in Rails

Ok, so we have our private key and access key ID ready to use so we can properly create signed cookies. Let's put these to use.


First, grab the 'aws-sdk' gem. I used version 3 of the SDK. In your Gemfile:

gem 'aws-sdk', '~> 3'


Second, let's set up a global Cookie signer to use in our app. Create an initializer config/initializers/aws.rb and put this code in it:
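Based on the aws-sdk v3 Aws::CloudFront::CookieSigner usage later in this post, the initializer looks something like this (the key pair ID and file name below are placeholders; substitute your own):

```ruby
# config/initializers/aws.rb
# Placeholder values -- substitute your own key pair ID and private key file.
CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
  key_pair_id: 'APKAIEXAMPLEKEYID',
  private_key_path: Rails.root.join('cloudfront', 'pk-APKAIEXAMPLEKEYID.pem').to_s
)
```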


You'll fill in your key_pair_id with the ACCESS KEY ID from the previous step. For the private_key_path, type the path to where you saved the private key file. The Access Key Id might work better as an environment variable as well. So you might have something more like:
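A sketch of that variant (the environment variable name here is just an example):

```ruby
# config/initializers/aws.rb
CF_COOKIE_SIGNER = Aws::CloudFront::CookieSigner.new(
  key_pair_id: ENV['CLOUDFRONT_KEY_PAIR_ID'],  # example env var name
  private_key_path: Rails.root.join('cloudfront', 'private_key.pem').to_s
)
```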


Using a before_action to create the cookie

So the next question is: when do you actually want to create the cookie? My first approach was to create the cookie right after the user signed in. That seemed smart. The thing is, if they signed in and then left their browser for a long time, the cookie might expire and they'd have to sign out and sign back in. You could manage this by automatically signing them out, but I decided that was too complicated for my use case. You can definitely do it that way, but here's what I decided to do instead.

I decided to write a before_action on all actions that checks whether the user is signed in. If they are, I set the cookie. This way, every request they perform while signed in renews the cookie. The only catch is making sure you clean up the cookie when they sign out.

So here's what my application controller looked like:

class ApplicationController < ActionController::Base
  before_action :set_cloudfront_signed_cookie

  ...

  private

  def set_cloudfront_signed_cookie
    if user_signed_in?
      cookies_values = CF_COOKIE_SIGNER.signed_cookie("", policy: policy)
      cookies_values.each do |k, v|
        cookies[k] = { value: v, expires: 10.minutes.from_now, domain: :all }
      end
    end
  end

  ...
end

So this runs before every action. If the user is signed in, we create a signed cookie using the CF_COOKIE_SIGNER from the AWS SDK. That spits back a hash of values that we have to write to the client's cookies. For each cookie value, we set it to expire after 10 minutes, and we also specify the very important domain: :all. That argument sets the cookie for ".bitesite.ca" rather than "www.bitesite.ca". Once you do that, those cookie values will also be sent with requests made to "video-stream.bitesite.ca" and "video-file.bitesite.ca".

Let's take a closer look at the initial call to CF_COOKIE_SIGNER.signed_cookie.

First of all, you'll see I've passed an empty string as the URL to the method. This is not just for demonstration purposes, and it's not a mistake; this is literally the code I use, and I'll tell you why. If you pass a custom-made policy to this method, the URL parameter doesn't matter at all. So I purposely put an empty string there to let other developers know that it has nothing to do with making this all work.

Now, what I just said is that the URL is ignored if you pass in a custom policy, and that custom policy is the second argument, policy. Let's take a look at that method below, which also lives in the application controller as a private method:

class ApplicationController < ActionController::Base
  ...

  private

  def policy
    resource = "http*://video*.bitesite.ca/*"
    expiry = 10.minutes.from_now

    {
      "Statement" => [
        {
          "Resource" => resource,
          "Condition" => {
            "DateLessThan" => { "AWS:EpochTime" => expiry.utc.to_i }
          }
        }
      ]
    }.to_json.gsub(/\s+/, '')
  end
end

This is the policy included in the cookie that the Amazon servers check when the browser makes a request. The expiry specifies how long the cookie is valid for. Remember, we call this on every request a signed-in user makes, so it gets renewed every time they browse to a page. What's more important here is the way the resource string is constructed. Amazon allows you to put wildcards in the resource URL. This is the key to the policy working for multiple files (and multiple servers, for that matter).
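To see what actually ends up in the cookie, here's a standalone, plain-Ruby version of the policy construction with a fixed expiry so the output is reproducible (the resource string matches my bitesite.ca setup; adapt it to your own domain):

```ruby
require 'json'

# Build the same Cloudfront custom policy as the controller's policy method,
# but with a fixed expiry instead of 10.minutes.from_now.
resource = "http*://video*.bitesite.ca/*"
expiry   = Time.utc(2030, 1, 1)

policy = {
  "Statement" => [
    {
      "Resource" => resource,
      "Condition" => {
        "DateLessThan" => { "AWS:EpochTime" => expiry.to_i }
      }
    }
  ]
}.to_json.gsub(/\s+/, '')  # strip whitespace, as in the controller

puts policy
```

The gsub at the end guarantees a compact, whitespace-free JSON string, which is what gets signed and stuffed into the cookie.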

Let's break down the three wildcards. First, there's the scheme:

http*

This is optional, but it basically allows both secure and non-secure requests. That is, it will allow the browser to request over "http://" and "https://".

Secondly, we have the host:

video*.bitesite.ca

What's nice about this is that it allows the cookie to work for both our streaming Cloudfront instance and our single-file Cloudfront instance. That is, it will work for both "video-stream.bitesite.ca" and "video-file.bitesite.ca".

And lastly, we have the path:

/*

That allows the cookie to apply to basically any file hosted on those servers.
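Cloudfront evaluates these wildcards on its own servers, but as a rough local illustration of what the pattern is meant to match, here's a small Ruby approximation (the URLs are made-up examples, and the regex is only an approximation, not Cloudfront's actual matching logic):

```ruby
# Rough approximation of the policy's wildcard matching -- for illustration
# only; Cloudfront performs the real check server-side.
PATTERN = %r{\Ahttps?://video[^/]*\.bitesite\.ca/.+\z}

def allowed?(url)
  !!(url =~ PATTERN)
end

puts allowed?("https://video-stream.bitesite.ca/output/movie/playlist.m3u8") # true
puts allowed?("http://video-file.bitesite.ca/output/movie.mp4")              # true
puts allowed?("https://www.bitesite.ca/index.html")                          # false
```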

Alright, that's it. Put that into your code, sign in, and browse to a page. Your cookies should now be set. It's really easy to see these in Chrome: just open the developer tools and go to the Application tab. Open up your cookies and you should see cookies for your domain.

You'll see that the domain on the Cloudfront cookies is ".bitesite.ca".

The HTML Code

With your cookies in place, ready to be sent with your video-stream and video-file requests, you're ready to cap it all off. Code up a page and put this in, with one source pointing at your streaming playlist URL and one at your mp4 URL:

<video>
  <source src="" />
  <source src="" />
  Your browser does not support HTML5 video.
</video>

Feel free to add thumbnails generated by AWS and controls:

<video width="100%" controls poster="">
  <source src="" />
  <source src="" />
  Your browser does not support HTML5 video.
</video>

And that's pretty much it! Everything should be working.

If you want to make sure it's secure, grab the mp4 URL, log out of your app, wait 10 minutes for the cookie to expire, and then paste that URL into your browser. You should get an error.

Finishing it all off, cleaning up your cookies

Now, because of my decision to renew the cookie on every request, it's a good idea to kill the cookie right after a user logs out. So wherever you handle logging out (I use Devise, so I override SessionsController#destroy), do this:

class SessionsController < Devise::SessionsController
  def destroy
    clear_cloudfront_cookies
    super
  end

  private

  def clear_cloudfront_cookies
    cookies.delete("CloudFront-Key-Pair-Id", domain: :all)
    cookies.delete("CloudFront-Policy", domain: :all)
    cookies.delete("CloudFront-Signature", domain: :all)
  end
end

It's VERY important that you specify domain: :all, because that's how the cookies were set up. Otherwise, they won't delete properly.


With that, you now have a great video solution! Congrats. This took me 3-4 solid days of debugging to get through, so hopefully it helps some of you. The great thing about the AWS Answer is that it's a nice infrastructure for uploading, transcoding, and serving files. In the future you could build an interface for users to upload files; once they're uploaded to the S3 source bucket, they'll automatically get transcoded, and you can then inspect the DynamoDB table programmatically to serve them up.

Our project didn't require that level of sophistication, but it's good to know we have it in our back pocket if we need it. With Cloudfront and streaming files, you know you're serving your users quickly and with minimal data usage.

There's always room for improvement, so be sure to let us know if you have anything to add. (At the time of writing, we don't have comments implemented on the blog, but they're coming soon.)

Thanks for reading.

Casey Li
CEO & Founder, BiteSite

What is an MVP or Minimum Viable Product?

process methodology software

When it comes to building software, whether you’re building it yourself or hiring a custom software services company to help you out, an increasingly important topic to understand is the Minimum Viable Product or MVP for short.

Before we dive into it, first you should know that MVP has a few names. In fact the first time I came across it, I was actually introduced to it as MMP or Minimal Marketable Product. I read about it in an amazing book called Agile Product Management with Scrum.

Whatever it’s called, MVP embodies a very important philosophy when it comes to software development and can actually be applied to other fields.

In this article, we’ll dissect MVP and explain how it’s one of the best things you can do when starting a new product, a new feature, or new anything :)


Let’s start with some background on software development. Back in the day, software used to be developed in what was called a Waterfall model which was most likely adopted from other engineering disciplines. From a high level, it would look like this:

  • Domain Analysis
  • Requirements Gathering
  • Design
  • Implementation
  • Testing
  • Delivery

If you were building a software product, you might spend a month or so analyzing your domain - in other words, understanding the world that your users inhabit and the lives they live. Then you’d move onto requirements gathering where you would meticulously define every aspect of your application and list out every feature that your users wanted or would want. You would spend a good amount of effort fleshing out these requirements and getting every detail right. Next up would be design. Based on requirements, you would start to layout the user interface and the software architecture to build your application. After your team approved all that, you would move onto coding the application, testing it and finally delivering it to your users.

From start to finish, you would probably be spending anywhere between 6 months and a few years before you actually presented your users with a usable product.

If you happen to get all the steps right leading up to the release, you’d be a genius. However, the reality is it’s incredibly hard and rare to get every step right without getting proper, genuine feedback. This is the problem with Waterfall.

You may ask your potential users for feedback along the way, you may show them wireframes, you may show them mockups, but until you put an actual usable piece of software in their hands, at no fault of their own, they won’t give you genuine feedback.

How many times has someone looked at a proof of concept and said “Wow, that’s great - let’s move forward!”. Then, when the real product is put in their hands, they figure out all these issues with it.

Therein lies the problem.

It’s hard for anybody to truly evaluate a product based on documentation, meetings, discussions, wireframes, or designs.

With Waterfall, what you’re left with is months, if not years of assumptions and predictions about how someone will feel about a product rather than genuine, reactive feedback.

It’s the genuine, reactive feedback that you want so that you know you are building a product or set of features that users will genuinely want and need. You want to get that as soon as possible and avoid long stints of assumptions and predictions that cost you both time and money.

The big advantage of software

As mentioned, Waterfall was adopted by the software industry most likely because it’s what other disciplines used. However, there is a big advantage that software has over things like civil or mechanical engineering.

Let’s use bridge building as an example. If you’re building a bridge, you should properly do a lot of upfront work, analysis, calculations, and small tests before you put your first user on it. When you open up a bridge for use - you get very little chance for error and it costs a lot of time and effort to redo anything.

This is not the case for most software.

For most software products, companies are given the chance to easily update and continually improve their product. With modern technologies like the internet, it’s incredibly easy for a software company to improve their software by deploying updates over time.

Basically, software companies are given many chances and opportunities to change their product based on user feedback.

This distinct advantage combined with the importance of genuine, reactive feedback gave rise to iterative development.

Iterative Development

Iterative development is a very simple concept. Rather than taking a product from start to finish and then leaving it alone, you build the product, get feedback from your users, and do it all over again.

However, a company does not have infinite resources, so something has to give. What changes is the amount of effort and time spent in each iteration.

In Waterfall, you might spend 2 years doing domain analysis, requirements gathering, design, and implementation before you push out a product to your users.

With iterative development, you do some version of that but on a way smaller scale. You do some basic research, some basic design, and push out the product in a matter of a few weeks if not less. If you can’t sacrifice the level at which you do those activities, then you reduce the scope of what you’re implementing.

Either way, the idea is you do smaller chunks of development and repeat. There are many philosophies and processes that help you execute small iterations of development like Scrum, Extreme Programming, and Agile Methodologies.

MVP or Minimum Viable Product

So now, you understand the reasoning behind iterative development. It’s a push to get genuine feedback as fast as you can so that you can iterate, and improve your product. So how does a Minimum Viable Product fit in?

An MVP is the product of your first iteration of development.

Remember, your goal is to get user feedback as soon as possible. So an MVP is the smallest, simplest, most barebone version of your product you can come up with that will get you the feedback you need to proceed.

Let’s break that down a bit.

“Smallest, simplest, most barebone...”.

The idea here is that you want to reduce the time and effort as much as possible. The reality is that until you get user feedback, everything you assume could be wrong. So spending more and more time in the assumptive phase can really hurt your success. You want to reduce that time so you can test your assumptions and ideas as fast as you can. So the philosophy here is: how small can you make it?

“...that will get you the feedback you need...”.

This counterbalances the drive to make things small. If your MVP is too small, too bare, it could be so unusable that you get no feedback at all. If there is zero attraction to even try the feature or product, you're going to get nothing back.

“ proceed.”

This last thought is really your ultimate goal. An MVP is all about getting to the next step. If you execute an MVP correctly, you'll have enough information to help you make the big decisions needed to move forward.

Deciding on what makes up your MVP is really an art in trying to balance reduced time and effort and creating something that people will actually use.

Applying MVP at all levels

MVP was originally used when talking about releasing the first version of your product. Since then, however, its principles have been applied at all levels.

For example, let’s say you’re creating a new feature. Rather than building the best, highest gloss version of that feature - you might scope it down and release the MVP version of that feature to get feedback on where to go next.

It’s become so common at our company that we’ve started to use it as a verb: “Let’s MVP that feature!”

Later in this article, we’ll see how MVP can be applied to more than just software development.

How do you go about defining your MVP?

There are definitely a lot of strategies when it comes to defining an MVP but it doesn’t have to be a complicated process. In fact, at BiteSite, when we talk to our clients about “MVP’ing” their product or a new feature, it’s a pretty simple conversation.

When it comes to most people and how they think about their product or a new feature, nine times out of ten, they are already thinking bigger than an MVP. It’s just the nature of what happens when you think about your next great idea and get excited about what it could be.

So the process is very simple: go through every aspect of your product or feature and ask yourself “Do we really need this right now?”

Play your own devil’s advocate and you’ll be surprised how much you can cut out.

In fact, you may even notice these days that brand new products are missing features you would have thought were no-brainers. That’s probably a company putting out an MVP and seeing how the masses react. Remember how the first iPhone was missing copy and paste?

When you develop a product, you’ll most likely have an endless backlog of features. MVPs help get you the feedback you need to prioritize what’s important now versus what can wait until a later date.

Don't make your MVP too minimal

Something I’ve recently learned I’m guilty of is making the MVP too minimal. This usually stems from the developer side rather than the product management side.

When I started applying the MVP philosophy to our products and features, I made the mistake of always cutting out the same things. One thing I cut out all the time, for example, was UI design. I would always say: let’s just get the basic data entry working with a basic UI first and see how users respond. The problem was, if the interface was really bad, users wouldn’t use it at all.

Remember, the goal is to get feedback from users.

Part of the delicate dance of figuring out your MVP is figuring out what it is that users will care about and what they won’t care about. So make educated guesses and don’t strip out too much.

Defining your MVP doesn’t always mean cutting out functionality

A lot of times when we talk to clients about MVP, it’s not always about cutting functionality out for the user - but rather replacing it with a non-software solution.

For example, you may say “I want my users to be able to reset their own passwords.” A developer comes back to you and says, “Well. Right now, I can manually reset the password, but we’d have to build out an interface for the user to reset it themselves.”

Depending on the product, a good MVP solution to this might be to keep this feature out, and for now just have the developer reset the password manually and have a human being e-mail that user manually.

In the beginning when you’re dealing with your first set of users, this might be a great way to start. You may find out after you release it that for the first year - no one has requested to reset their password. On the flipside, you might find out in the first week that everybody wants to reset their password and as a result you prioritize this feature.

That’s what MVP is all about. Start off with a small scope and don’t implement a feature until you have good evidence that it’s needed.

You don’t have to get it right

The idea of an MVP is consistent with a lot of different processes, philosophies, and methodologies. You’ll read this a lot:

“It’s not about getting it right. It’s about moving forward.”

That’s the crux of it all. If you’re having a lot of trouble figuring out if your MVP is too small, or too big, or if you’re making the right assumptions or not - don’t worry about it too much. MVP is all about picking a path, implementing it, and testing it with real users. You’re bound to get some stuff wrong. But knowing you’re wrong based on feedback is way better than assuming you’re right or wrong.

Yes, it’s good to have informed discussions, it’s good to have opinions, and it’s good to make educated assumptions - but don’t dwell on this too long. Just move forward and get the feedback.

The Sprint Book by Jake Knapp really solidifies this concept and even assigns a “moderator” to the process to keep this in check. In fact, that book and its process in general is an amazing embodiment of MVP.

It’s all about the Feedback

By now you’ve gathered that MVP is all about feedback. The one thing to keep in mind, though, is that feedback can come in many different forms and they all have their place. The following is just a short list of different types of feedback you can aim for when finally releasing your MVP:

  • Direct Contact
    • After you’ve deployed your MVP, you can directly contact your users and ask them what they liked and what they didn’t. This obviously only suits a smaller number of users, but it can be very helpful for companies starting out. One thing to keep in mind is to analyze the feedback rather than just follow it. Just because one person says they didn’t like your sign-in screen doesn’t mean everyone hates it.
  • Surveys
    • You could proactively reach out to some users and send a survey. There is a whole art to designing surveys, but even simple surveys can get you some great information.
  • Data and Usage Analytics
    • Data analytics is a great way to get unfiltered, honest, reactive feedback. When you talk to customers in person, they may hold back their true sentiments. Data and usage analytics let their actions do the talking. Take a look at how many people are actually using your feature or product and how they are using it. Tools like Google Analytics, New Relic, Skylight, and Mixpanel can help you with this.

MVP is just a start

The last thing I’ll mention about MVP is to remember that it’s a philosophy on how to start implementing your product or feature.

When I say scale back your product idea or vision - that’s only to implement your MVP and get going. By no means do I mean throw out your big ideas or vision. Keep those in your back pocket and let your MVP inform you as to whether or not you’re on the right track.

How can you apply this

You’ve learned a lot about what an MVP is. So how can you start applying what you’ve learned? Well, it depends on what you’re working on.

Are you thinking of building a new piece of software?

If you are, chances are you’ve already had a big vision in place. Keep that vision in your back pocket and start to think about your MVP. Put all your ideas on the wall, and start crossing off the ones you really don’t need to get going.

Our favourite clients at BiteSite are the ones who have thought about their MVP. They come in with a big inspiring vision, and then shortly after say “...but as a start, here’s what I envision.”

Personally, I love when I can envision a product that I can build in a couple of days that will instantly bring value.

Are you an established company with an established product?

If so, chances are you are considering developing new features. You can apply the MVP principles to your features. Scale them down to a small enough level that you can quickly implement them, deliver them, and get some informative feedback.

MVP for everything

What’s interesting is that as I go further into my professional career, I find the principles behind MVP can be applied to a whole lot. The idea of implementing small, quick deliverables and then getting feedback is finding its way into many areas.

Currently, I plan on applying it to our sales process and identifying target markets and in the past I’ve applied it to small changes in our company. We would try small versions of our changes and if they failed, we’d scrap them. If they succeeded, we’d iterate and build on them.

MVP is an incredibly powerful idea that was introduced to me through software, but I’m finding more and more that it can be applied to almost everything.

Casey Li
CEO & Founder, BiteSite

What exactly is custom software?

software business

The world is full of software. Just take a moment to look around you and you’re probably surrounded with many examples. Whether it be the browser that you’re reading this in, the software you use at work, or the software that’s running inside your car, software is everywhere these days.

Most of the time, we are interacting with software built for the masses. We’ve got Instagram, Whatsapp, and Facebook for our social lives, we have Word, Excel, and Acrobat at work, and we have so much software behind so many devices we use everyday.

While these applications are incredible and improve our lives in so many ways, there are times where they aren’t quite what we’re looking for. Especially when it comes to work or running a business, sometimes the applications out there fall short in one way or another.

So what do you do?

You really have three options:

  • Put up with existing software and live with its shortcomings
  • Wait for updates or a brand new application to come out that hits the mark
  • Build something yourself

Now when it comes to building it yourself, most of us don’t have the luxury of knowing how to program a piece of software.

That’s where custom software comes in.

Custom software is software that is built specifically for you, your business, your needs, and your wants, rather than software that is built for the masses. It’s like the difference between getting a bespoke suit made just for you versus buying one off the rack.

Typically, because a business doesn’t have the technical knowhow to build custom software themselves, they hire another company to build them a piece of software to solve a problem they are having. The company they hire is a custom software shop and the software they build is custom software.

The many forms of custom software

Custom software comes in many forms and is sold by many different types of companies. When it comes to the software itself, custom software shops analyze the problem and decide on the best technology to use. They may recommend a web application, a mobile application, a desktop application - or even recommend that custom software is not the way to go at all. While custom software is typically built from scratch, sometimes custom software solutions involve integrating existing applications.

If this is getting a little confusing, let’s look at an example of a good candidate for custom software. Let’s say you run a plumbing company. Today, you get appointments by having people call a phone number. The appointment leads to a service call that you fill out on paper and give to one of your plumbers. They complete the job, fill out the paper service call, come back to the office and file the final report.

You might think to yourself that it would be great if a lot of this was digitized and automated. You go to a custom software company and educate them about your workflow, and they build you a custom mobile application that lets your plumbers receive service orders on their phones, fill out reports on their phones, and have the data automatically sync to a back-office application that you can view. They may even build you a web application that allows your customers to book online. That would be a great candidate for custom software.

Who sells custom software?

When it comes to the companies that offer custom software services, there are a whole bunch, and they call themselves all sorts of names. To make things more complicated, companies focused on other offerings sometimes provide custom software solutions as a side service. For example, even though marketing agencies are focused on marketing activities, they may still offer custom software services since that can play into their strategy. Below is a small list of the types of companies that may offer custom software services:

  • Custom Software Shop
  • Custom Software Services Company
  • Software Firm
  • Software Consulting Agency
  • Software Consultant
  • Software Freelancer
  • Digital Agency
  • Marketing Agency
  • Web Design and Development Shop
  • Mobile Development Shop
  • Software Solution Firm

All these types of companies and more offer custom software.

When it comes to the service of custom software, typically companies do a lot. Among other things, they will analyze your problem, make recommendations, strategize with you, implement a robust development process, design the UX and UI, implement the code, and deliver the final product. In most cases, they will also maintain the software. By the end, if you’ve dealt with a good custom software shop, you’ll have a good sense of what it’s like to run your own software company.

Some great companies local to Ottawa that do custom software include Industrial, Netfore and BitHeads (Not to mention BiteSite :)). Outside of Ottawa, there are amazing companies like Thoughtbot and TWG.

So what?

Now that you know what custom software is - what is the big deal? We’ll be writing more and more articles on this subject, but custom software is all about solving a problem. By solving that problem, your company may get the edge on a competitor, your company may run more smoothly and efficiently, or your company just may experience more joy at work. Whatever it may be, it’s all about identifying problems that can be solved with software.

So are you a business owner? Spend 10 minutes thinking about your business and the challenges you face. Ask yourself: could something be done better? Could you picture yourself using software to solve it?

If so, custom software might be the answer.

Casey Li
CEO & Founder, BiteSite