Github: Your Single Point Of Failure


To say that I love Github would be a bit of an understatement. I recommend it enthusiastically whenever I describe code review processes. At Mozilla, the web development team uses Github for our code reviews, since line notes and pull requests fit our code review requirements perfectly. Github allows a large distributed team to work independently while still working together.

However, Github has recently experienced some issues with its performance. Thankfully, most of these issues have been minor. But they highlight a serious potential flaw in using Github for critical development processes:

Github is a single point of failure.

How Github and Git work together

The way Git works allows every developer to work independently while possessing a complete copy of the repository at all times. So there's little to no risk of data loss, beyond work that sits only on a particular programmer's computer and hasn't yet been shared with others. This is different from a centralized version control system like Subversion.
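
To make this concrete, here's a minimal sketch (the repository URL is hypothetical). Once a clone finishes, the full history is on your machine and every inspection command works without a server:

    # Cloning copies the complete repository, not just a working copy.
    git clone https://github.com/example/project.git
    cd project

    # Everything below runs entirely offline -- no server required:
    git log --oneline     # the full commit history
    git branch -a         # every branch fetched at clone time
    git diff HEAD~3       # compare against any older commit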

Github doesn't change this model in any way, other than offering a centralized canonical repository for project commits. Each fork is essentially an independent repository that exists for a particular developer, and that developer's local clone is yet another copy. Developers push their commits, and Github's pull request process essentially attempts to merge one group of commits into another; this is how developers share their work upstream with one another.
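
In practice, that workflow looks something like this (the fork and upstream URLs are hypothetical):

    # Your fork is "origin"; add the canonical repository as "upstream".
    git clone https://github.com/you/project.git
    cd project
    git remote add upstream https://github.com/example/project.git

    # Commit on a topic branch and push it to your fork...
    git checkout -b fix-widget
    git commit -am "Fix the widget"
    git push origin fix-widget
    # ...then open a pull request asking upstream to merge fix-widget.

    # Picking up everyone else's work is just a fetch and a merge:
    git fetch upstream
    git merge upstream/master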

The potential problem for developers and companies

By design, each developer has a full copy of the repository, so there's little risk of data loss for a company that uses a service like Github. The problem arises because Github has been worked into the core of many companies' development processes.

For example, companies that have developers work locally and push their code to Github for deploys have no control over the infrastructure on which their code resides. If Github is down when they need to deploy, they simply cannot.

Github also appears to share infrastructure between public and private repositories, making its paid clients just as susceptible to downtime as its free users. In effect, Github charges companies and developers to keep their private code out of public view, but offers no SLA for uptime in return.

Github does offer enterprise options that let companies host Github on their own servers, but these are realistic only for organizations with a decent IT budget. And even then, Github offers no package that comes with a known SLA.

So what is the solution?

Unfortunately, I don't have an easy solution to this problem. Github is best of breed and far superior to other tools like Google Code. The Object Oriented PHP Masterclass will rely heavily on Github's tools to help students correct their code, and I doubt I'll see my team move from Github to something else in the near future.

I would love to see the open source community come up with a reasonable replacement for Github. But much of Github's power lies in the fact that money changes hands, and money is a terribly powerful motivator for creating a beautiful product.

I believe Github will continue to grow, that their stability issues will eventually settle, and that things will improve. But I still feel uncomfortable with my (and everyone else’s) single point of failure.

Update: There is a possible GitHub replacement, known as Gitlab. Gitlab is open source, released under the MIT license, and (ironically) hosted on Github. Notably, Gitlab is written in Ruby on Rails; you should understand the security implications of Ruby on Rails before using Gitlab.

Brandon Savage is the author of Mastering Object Oriented PHP and Practical Design Patterns in PHP

Posted on 2/10/2013 at 5:52 pm
Categories: Business Management, Open Source

EJ wrote at 2/10/2013 8:15 pm:

What is stopping you from also syncing your repository to another provider (or software you host internally) that hosts Git repos? If GitHub is down, push to and deploy from the alternative until it recovers.

Any tool will fail, so you have to have redundancy if you can’t tolerate that.
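
For example, a second remote is a one-time setup; a minimal sketch (remote name and URLs hypothetical):

    # Add a second provider as a backup remote:
    git remote add backup git@bitbucket.org:you/project.git

    # Push all branches and tags to both remotes:
    git push origin --all && git push origin --tags
    git push backup --all && git push backup --tags

    # If GitHub is down at deploy time, pull from the backup instead:
    git pull backup master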

Sherif wrote at 2/11/2013 1:03 am:

You’ve also got Atlassian Stash as an option.

Roderik van der Veer (@r0derik) wrote at 2/11/2013 1:58 am:

I believe there is an even greater risk with GitHub. Your own project, due to the distributed nature of Git, has a lot of alternatives if GitHub is down. You have the full repo on your computer, on the server, maybe on a GitLab or Bitbucket instance; you can even put it in Dropbox.

Now, dependencies: those are hard. Package managers often depend on GitHub for their files (we use Composer; about others like RubyGems and npm I'm not so sure), and that goes not only for the deps of your projects, but also for the deps of your deps.

We are looking, but have not found a workable solution yet. The only option I see at this time is using a bash script and a cron job to mirror all the repos we depend on to our GitLab instance, and then using Satis (a private Packagist repo) to override the download locations.
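
Such a mirror job can be quite small; a sketch under assumed names (the repository list, paths, and GitLab host are hypothetical):

    #!/bin/sh
    # Run nightly from cron, e.g.: 0 3 * * * /usr/local/bin/mirror-deps.sh
    set -e
    cd /srv/mirrors
    for repo in vendor-a/library-one vendor-b/library-two; do
        dir=$(echo "$repo" | tr / -).git
        # First run only: create a bare mirror clone of the dependency.
        [ -d "$dir" ] || git clone --mirror "https://github.com/$repo.git" "$dir"
        # Every run: fetch the latest refs from GitHub, push them to GitLab.
        ( cd "$dir" \
            && git fetch --prune origin \
            && git push --mirror "git@gitlab.example.com:mirrors/$dir" )
    done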

But then, what makes us so sure that our own infrastructure is that much more solid than GitHub's (especially if you don't want a full-time engineer making sure this keeps working)?

Roberts wrote at 2/11/2013 2:23 am:

There is an answer to your prayers: RhodeCode. Private and public repos, Git and Mercurial support, pull requests. Check it out.

mof wrote at 2/11/2013 3:54 am:

Exactly. Git is git. Make copies of all GitHub repos to build/deploy from. Relying on third parties with a high level of complexity (i.e., GitHub!) means there will be failures, and if you're not OK with that, mirroring and failover must be in place.

Git makes this very easy. In fact, for open source, I see very little gain in owning a GitHub appliance. That's good for proprietary stuff _or_ when your data has to stay private to the company (i.e., not in the cloud).

The only parts of GitHub you can't fall back from are bugs, comments, etc., which should be non-deploy-critical anyway.

Note that it's the same for any "cloud" service. They're good, but they have similar limits.

Speekenbrink (@fruitl00p) wrote at 2/11/2013 4:17 am:

To reiterate EJ: we've been doing just that since the major GitHub outage back in 2012. After every successful build, our build server mirrors to Bitbucket (aside from the local repo copy we keep internally). All our production servers have three remotes configured: GitHub (remote_0), Bitbucket (remote_1) and our internal SSH-accessible server (remote_999). If our deployment scripts can't connect to remote_X, they'll connect to remote_X+1. If there is no remote_X+1, we fall back to our internal server (which should never happen).

This way we can easily add new 'remotes' without losing too much sleep over it :) Works like a charm! (and indeed eliminates the SPOF)
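
The fallback logic itself can stay tiny; a sketch of the idea (remote names as described above, deploy branch hypothetical):

    #!/bin/sh
    # Try each remote in order; deploy from the first one that answers.
    for remote in remote_0 remote_1 remote_999; do
        if git fetch "$remote" 2>/dev/null; then
            git merge --ff-only "$remote/master"
            echo "deployed from $remote"
            exit 0
        fi
    done
    echo "all remotes unreachable" >&2
    exit 1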

martin wrote at 2/11/2013 4:27 am:

There is also Bitbucket. It doesn't fail as often as GitHub and is oriented more toward private companies.

Jonathan (@jonathans_blog) wrote at 2/11/2013 5:36 am:

There is nothing stopping a company from setting up a small Linux machine, running Git on it, and using it as their 'canonical' Git repo…

…except that you have to get a spare PC, put Linux on it, stick it in a cupboard, and make sure it's patched.
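
The setup really is that small (hostname and path hypothetical):

    # On the cupboard machine, create a bare repository once:
    ssh git@cupboard-box "git init --bare /srv/git/project.git"

    # On each developer's machine, add it as a remote and push:
    git remote add canonical ssh://git@cupboard-box/srv/git/project.git
    git push canonical --all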

Petr wrote at 2/13/2013 6:32 am:

Another alternative to GitHub and GitLab is Gitorious: gitorious.org/gitorious

Matthew Weier O'Phinney (@mwop) wrote at 2/26/2013 12:51 pm:

While I understand the premise, it misses the mark. Git is categorized as a *distributed* version control system (DVCS). Distributed means that any given repository can be marked as canonical at any time.

By convention, though, you *will* have a canonical repository somewhere. This is so developers know where to look for changes, and where to post changes against. While git makes this process quite a bit easier than subversion or other centralized version control systems, it still takes effort; you need to set up ACLs, potentially daemons, etc. GitHub takes the effort out of that.

That does not make GitHub a SPOF necessarily, however. As noted, because any fork/checkout of a git repository contains all changes up to the last time it was synced with the canonical repository, any given checkout can be promoted to canonical at any given time. Usually, there are any number of forks that are up-to-date — if nothing else, the last person to push to GitHub will have a full changeset.

With regard to hooks and whatnot, again, GitHub simplifies the process; but if GitHub dies, most, if not all, of those hooks can be recreated somehow. Were you using Travis-CI? You can re-create your setup as scripts for Jenkins. How about pushing to IRC? There are standard hooks you can drop into a bare repository for that. GitHub is convenience only; anything it offers can be recreated elsewhere, even the issues and wiki system (though to do that, you'll likely need to have some systems in place ahead of time to mirror the content).
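
As one illustration, a post-receive hook dropped into a bare repository can mirror every accepted push to a second remote (the "backup" remote name is hypothetical, and this stands in for whatever the GitHub-hosted hook was doing):

    #!/bin/sh
    # hooks/post-receive in the bare canonical repository.
    # Assumes a remote named "backup" was configured via `git remote add`.
    git push --mirror backup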

It is precisely the fact that git is distributed, and the fact that GitHub has implemented an API for everything it does, that have made GitHub successful: you can move from it at any time, but the service makes so many things convenient, and the downtime is so little for most people, that sticking with it makes as much sense as any other solution.

My suggestions:

* Don't use the green "merge" button; always push from a local repository (see the sketch after this list). This ensures somebody always has the most recent changesets.
* Write scripts to backup your wiki and issues.
* Have a list of the hooks you use, in case you ever need to recreate them later.
* Keep the scripts and your hooks list in your repository.
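
For the first suggestion, merging from a local repository looks like this (the pull request number and branch names are hypothetical; GitHub exposes every pull request as a read-only ref):

    # Fetch pull request #123 into a local branch:
    git fetch origin pull/123/head:pr-123

    # Merge it locally and push the result up:
    git checkout master
    git merge --no-ff pr-123
    git push origin master
    # Your local clone now holds the merged history, GitHub or no GitHub.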

In this way, your project stays fully distributed, so that you never have a SPOF; you can always re-launch from elsewhere.

Hikari wrote at 4/9/2013 8:07 pm:

Well, WordPress.com also hosts free and premium services in the same datacenter. Google Apps does too. We're living in the cloud era; we shouldn't have to worry about where our services are hosted.

I myself don’t like Git. I’d rather pay for a Subversion + Redmine service.

Regardless of the software, if we want top availability and reliability, we must pay for it. There's no other way. SoftLayer, for example, has three sites in the USA linked by dedicated optical cables. If it's worth it, it's just a matter of paying for two servers to replicate the service, or buying a backup service hosted on another continent.

