alex gaynor's blago-blog

Best of PyCon 2014

Posted April 17th, 2014. Tagged with python, community.

This year was my 7th PyCon; I've been to every one since 2008. The most consistent trend in my attendance has been that over the years, I've gone to fewer and fewer talks, and spent more and more time volunteering. As a result, I can't tell you what the best talks to watch are (though I recommend watching absolutely anything that sounds interesting online). Nonetheless, I wanted to write down the two defining events of PyCon for me.

The first is the swag bag stuffing. This event occurs every year on the Thursday before the conference. Dozens of companies provide swag for PyCon to distribute to our attendees, and we need to get it into over 2,000 bags. This is one of the things that defines the Python community for me. By all rights, this should be terribly boring and monotonous work, but PyCon has turned it into an incredibly fun and social event. Starting at 11 AM, half a dozen of us unpacked box after box from our sponsors and set the area up. At 3 PM, over one hundred volunteers showed up to help us operate the human assembly line, and in less than two and a half hours, we'd filled the bags.

The second event I wanted to highlight was an open space session on composition. For over two hours, a few dozen people discussed the problems with inheritance, the need for explicit interface definition, the most idiomatic ways to use decorators, and other big-picture software engineering topics. We talked about design mistakes we'd all made in the past, and discussed refactoring strategies to improve code.

These events are what make PyCon special for me: community, and technical excellence, in one place.

PS: You should totally watch my two talks. One is about pickle and the other is about performance.

House and Twitter

Posted March 20th, 2014. Tagged with meta.

When I was younger, I started watching the TV show House M.D., and I really liked it. At some point my mom asked me if I was more sarcastic since I started watching the show. I said of course not, I've always been extremely sarcastic.

I was wrong. Watching House made being sarcastic cool.

Using Twitter makes being snarky and not putting thought into things cool. So I'm quitting Twitter. I'm already snarky and not-thoughtful enough; I don't need something to incentivize it.

I'll miss Twitter. Strange as it is to say, I've made many friends via Twitter, I've exposed myself to new perspectives, and I've laughed until it hurt. It's not worth it though.

If you still want to chat with me, or, for some unknown reason, hear what I have to say, you can join ##alex_gaynor on freenode, follow this blog, or email me at alex.gaynor@gmail.com.

Why Crypto

Posted February 12th, 2014. Tagged with python, open-source.

People who follow me on twitter or github have probably noticed over the past six months or so: I've been talking about, and working on, cryptography a lot. Before this I had basically zero crypto experience. Not a lot of programmers know about cryptography, and many of us (myself included) are frankly a bit scared of it. So how did this happen?

At first it was simple: PyCrypto (probably the most used cryptographic library for Python) didn't work on PyPy, and I needed to perform some simple cryptographic operations there. Someone else had already started work on a cffi-based cryptography library, so I started trying to help out. Unfortunately the maintainer had to stop working on it. At about the same time several other people (some with much more cryptography experience than I) expressed interest in the idea of a new cryptography library for Python, so we got started on it.

It's worth noting that at the same time this was happening, Edward Snowden's disclosures about the NSA's activities were also coming out. While this never directly motivated me to work on cryptography, I also don't think it's a coincidence.

Since then I've been in something of a frenzy, reading and learning everything I can about cryptography. And while originally my motivation was "a thing that works on PyPy", I've now grown considerably more bold:

Programmers are used to being able to pick up domain knowledge as we go. When I worked on a golf website, I learned how people organize golf outings; when I worked at rdio, I learned about music licensing; and so on. Programmers ply their trade across many different domains, so we're used to learning about those domains with a combination of Google, asking folks for help, and looking at the results of our code to see if they look right.

Unfortunately, this methodology leads us astray: Googling many cryptographic problems leaves you with a pile of wrong answers, very few of us have friends who are cryptography experts to ask for help, and one can't just look at the result of a cryptographic operation and see whether it's secure. Security is a far more subtle property than the ones we usually deal with:

>>> encrypt(b"a secret message")
b'n frperg zrffntr'

Is the encrypt operation secure? Who knows!
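
For what it's worth, the "ciphertext" above is just ROT13. Here's a minimal sketch of an encrypt function that produces exactly that output while providing no security whatsoever; nothing about the result tells you so:

import codecs

def encrypt(message):
    # Just ROT13: the output looks scrambled, but anyone can reverse it.
    return codecs.encode(message.decode("ascii"), "rot_13").encode("ascii")

print(encrypt(b"a secret message"))  # b'n frperg zrffntr'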

Correctness in this case is dictated by analyzing the algorithms at play, not by looking at the result. And most of us aren't trained to do that. In fact, we've been actively encouraged not to know how. Programmers are regularly told "don't do your own crypto" and "if you want to do any crypto, talk to a real cryptographer". This culture of ignorance about cryptography hasn't resulted in us all contacting cryptographers; it's resulted in us doing bad crypto:

Usually when we design APIs, our goal is to make it easy to do something. Cryptographic APIs seem to have been designed on the same principle. Unfortunately that something is almost never secure. In fact, with many libraries, the path of least resistance leads you to doing something that is extremely wrong.

So we set out to design a better library, with the following principles:

  • It should never be easier to do the wrong thing than it is to do the right thing.
  • You shouldn't need to be a cryptography expert to use it; our documentation should equip you to make the right decisions.
  • Things which are dangerous should be obviously dangerous, not subtly dangerous.
  • Put our users' safety and security above all else.

I'm very proud of our work so far. You can find our documentation online. We're not done. We have many more types of cryptographic operations left to expose, and more recipes left to write. But the work we've done so far has stayed true to our principles. Please let us know if our documentation ever fails to make something accessible to you.
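
To give a flavor of what this looks like, here's a minimal sketch using the library's Fernet recipe, its high-level symmetric encryption layer (a sketch based on the library's current documentation; the exact API may postdate this post):

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # a fresh random key; keep it secret, keep it safe
f = Fernet(key)
token = f.encrypt(b"a secret message")  # authenticated encryption
print(f.decrypt(token))  # b'a secret message'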

Why Travis CI is great for the Python community

Posted January 6th, 2014. Tagged with python, open-source.

In the unlikely event you're both reading my blog and have not heard of Travis CI: it's a CI service which specifically targets open source projects. It integrates nicely with Github, and is generally a pleasure to work with.

I think it's particularly valuable for the Python community, because it makes it easy to test against a variety of Pythons, which maybe you don't have at your fingertips on your own machine, such as Python 3 or PyPy (Editor's note: Why aren't you using PyPy for all the things?).

Travis makes this drop-dead simple; in your .travis.yml, simply write:

language: python
python:
    - "2.6"
    - "2.7"
    - "3.2"
    - "3.3"
    - "pypy"

And you'll be whisked away into a land of magical cross-Python testing. Or, if, like me, you're a fan of tox, you can easily run with that:

language: python
python: 2.7
env:
    - TOX_ENV=py26
    - TOX_ENV=py27
    - TOX_ENV=py32
    - TOX_ENV=py33
    - TOX_ENV=pypy
    - TOX_ENV=docs
    - TOX_ENV=pep8

install:
    # tox itself needs to be available in the build environment
    - pip install tox

script:
    - tox -e $TOX_ENV

This approach makes it easy to include things like linting or checking your docs as well.
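
For reference, the tox.ini side of such a matrix could look something like this (a hypothetical sketch, not any particular project's configuration):

[tox]
envlist = py26,py27,py32,py33,pypy,docs,pep8

[testenv]
# hypothetical test runner; substitute your project's own
deps = pytest
commands = py.test

[testenv:docs]
deps = sphinx
commands = sphinx-build -W -b html docs docs/_build/html

[testenv:pep8]
deps = flake8
commands = flake8 .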

Travis is also pretty great because it offers you a workflow. I'm a big fan of code review, and the combination of Travis and Github's pull requests is awesome. For basically every project I work on now, I work in this fashion:

  • Create a branch, write some code, push.
  • Send a pull request.
  • Iterate on code review.
  • Check the Travis results.
  • Merge.

And it's fantastic.

Lastly, and perhaps most importantly, Travis CI consistently gets better, without me doing anything.

PyPI Download Statistics

Posted January 3rd, 2014. Tagged with python.

For the past few weeks, I've been spending a bunch of time on a side project: getting better insight into who uses packages from PyPI. I don't mean which people, I mean which systems: how many users are on Windows, how many still use Python 2.5, do people install with pip or easy_install; questions like these, which come up all the time for open source projects.

Unfortunately until now there's been basically no way to get this data. So I sat down to solve this, and to do that I went straight to the source. PyPI! Downloads of packages are probably our best source of information about users of packages. So I set up a simple system: process log lines from the web server, parse any information I could out of the logs (user agents have tons of great stuff), and then insert it into a simple PostgreSQL database.

We don't yet have the system in production, but I've started playing with sample datasets. Here's my current one:

pypi=> select count(*), min(download_time), max(download_time) from downloads;
  count  |         min         |         max
---------+---------------------+---------------------
 1981765 | 2014-01-02 14:46:42 | 2014-01-03 17:40:04
(1 row)

All of the downloads over the course of about 27 hours. There are a few caveats to the data: it only covers PyPI, so packages installed with things like apt-get on Ubuntu/Debian aren't counted. And things like CI servers which frequently install the same package can "inflate" the download count, so this isn't a way of directly measuring users. As with all data, knowing how to interpret it and ask good questions is at least as important as having it.

Eventually I'm looking forward to making this dataset available to the community, both as a way to ask one-off queries ("What version of Python do people install my package with?") and as a whole dataset for running large analyses on ("How long does it take after a release before a new version of Django has widespread uptake?").

Here's a sample query:

pypi=> SELECT
pypi->     substring(python_version from 0 for 4),
pypi->     to_char(100 * COUNT(*)::numeric / (SELECT COUNT(*) FROM downloads), 'FM999.990') || '%' as percent_of_total_downloads
pypi-> FROM downloads
pypi-> GROUP BY
pypi->     substring(python_VERSION from 0 for 4)
pypi-> ORDER BY
pypi->     count(*) DESC;
 substring | percent_of_total_downloads
-----------+----------------------------
 2.7       | 75.533%
 2.6       | 15.960%
           | 5.840%
 3.3       | 2.079%
 3.2       | .350%
 2.5       | .115%
 1.1       | .054%
 2.4       | .052%
 3.4       | .016%
 3.1       | .001%
 2.1       | .000%
 2.0       | .000%
(12 rows)

Here's the schema to give you a sense of what data we have:

                                   Table "public.downloads"
          Column          |            Type             |              Modifiers
--------------------------+-----------------------------+-------------------------------------
 id                       | uuid                        | not null default uuid_generate_v4()
 package_name             | text                        | not null
 package_version          | text                        |
 distribution_type        | distribution_type           |
 python_type              | python_type                 |
 python_release           | text                        |
 python_version           | text                        |
 installer_type           | installer_type              |
 installer_version        | text                        |
 operating_system         | text                        |
 operating_system_version | text                        |
 download_time            | timestamp without time zone | not null
 raw_user_agent           | text                        |

Let your imagination run wild with the questions you can answer now that we have data!
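
For instance, a hypothetical query against this schema answering the "do people install with pip or easy_install?" question might look like:

-- Hypothetical example: installer breakdown for one package
SELECT installer_type, COUNT(*) AS downloads
FROM downloads
WHERE package_name = 'Django'
GROUP BY installer_type
ORDER BY downloads DESC;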

About Python 3

Posted December 30th, 2013. Tagged with python.

Python community, friends, fellow developers, we need to talk. On December 3rd, 2008, Python 3.0 was first released. At the time it was widely said that Python 3 adoption was going to be a long process; it was referred to as a five-year process. We've just passed the five-year mark.

At the time of Python 3's release, and for years afterwards, I was very excited about it: evangelizing it, porting my projects to it. For the past year or two, every new project I've started has had Python 3 support from the get-go.

Over the past six months or so, I've been reconsidering this position, and excitement has given way to despair.

For the first few years of the Python 3 migration, the common wisdom was that a few open source projects would need to migrate, and then the flood gates would open. In the Django world, that meant we needed a WSGI specification, we needed database drivers to migrate, and then we could migrate, and then our users could migrate.

By now, all of that has happened: Django (and much of the app ecosystem) supports Python 3, NumPy and the scientific ecosystem support Python 3, several new releases of Python itself have shipped, and users still aren't using it.

Looking at download statistics for the Python Package Index, we can see that Python 3 represents under 2% of package downloads. Worse still, almost no code is written for Python 3. As I said, all of my new code supports Python 3, but I run it locally with Python 2 and I test it locally with Python 2; Travis CI runs it under Python 3 for me; certainly none of my code is Python 3 only. At the companies with large Python codebases that I talk to, no one is writing Python 3 code, and basically none of them are thinking about migrating their codebases to Python 3.

Since the time of Python 3.1, it's been regularly said that the new features and standard library additions would act as carrots to motivate people to upgrade. Don't get me wrong, Python 3.3 has some really cool stuff in it. But 99% of everybody can't actually use it, so when we tell them "that's better in Python 3", we're really telling them "Fuck You", because nothing is getting fixed for them.

Beyond all of this, it has a nasty, pernicious effect on the development of Python itself: it means there's no feedback cycle. The fact that Python 3 is being used exclusively by very early adopters means that what little feedback happens on new features comes from users who may not be totally representative of the broader community. And as we get farther and farther into the 3.x series it gets worse and worse: now we're building features on top of other features, and at no level have they been subjected to actual wide usage.

Why aren't people using Python 3?

First, I think it's because of a lack of urgency. Many years ago, before I knew how to program, the decision was made to have Python 3 releases live in parallel to Python 2 releases. In retrospect this was a mistake: it resulted in a complete lack of urgency for the community to move, and that lack of urgency has given way to lethargy.

Second, I think there's been little uptake because Python 3 is fundamentally unexciting. It doesn't have the super big ticket items people want, such as removal of the GIL or better performance (for which many are using PyPy). Instead it has many new libraries (whose need is largely filled by pip install), and small cleanups which many experienced Python developers just avoid by habit at this point. Certainly nothing that would make one stop their development for any length of time to upgrade, not when Python 2 seems like it's going to be here for a while.

So where does this leave us?

Not a happy place. First and foremost, I think a lot of us need to be more realistic about the state of Python 3. Particularly the fact that, for the last few years, Python, the language, has not gotten better for the average developer.

The divergent paths of Python 2 and Python 3 have been bad for our community. We need to bring them back together.

Here's an idea: let's release a Python 2.8 which backports every new feature from Python 3. It would also deprecate anything which can't be changed in a backwards-compatible fashion; for example, str + unicode would emit a warning, as would any file which doesn't have from __future__ import unicode_literals. Users need to be able to follow a continuous upgrade path; Python 3 broke it, so let's fix it.
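
For illustration, here's the implicit coercion such a warning would target (a Python 2 session; the proposed warning itself is, of course, hypothetical):

>>> b"status: " + u"ok"  # Python 2 silently coerces str to unicode; Python 3 raises TypeError
u'status: ok'
>>> from __future__ import unicode_literals
>>> "string literals are now unicode, as in Python 3"
u'string literals are now unicode, as in Python 3'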

That's my only idea. We need more ideas. We need to bridge this gap, because with every Python 3 release, it grows wider.

Thanks to Maciej Fijalkowski and several others for their reviews, it goes without saying that all remaining errors are my own.

Gender neutral language - An FAQ

Posted November 30th, 2013. Tagged with ethics, diversity, open-source, community.

I'd like to refer to a hypothetical person in my documentation

Try something like this:

When a user visits the website, they will be assigned a session ID, and it will be transmitted to them in the HTTP response and stored in their browser.

But not like this!

When a user visits the website, he will be assigned a session ID, and it will be transmitted to him in the HTTP response and stored in his browser.

Why?

Using gendered pronouns signals to the audience your assumptions about who they are, and very often lets them know that they don't belong. Since that's not your intent, better to just be gender neutral.

And if you don't believe me, some folks did some science (other studies have consistently reproduced this result).

Can I just go 50/50 on male and female pronouns?

It's a nice idea; unfortunately, it doesn't work. Your users don't read your documentation cover to cover, so they won't be able to see your good intentions. Instead they'll be linked to somewhere in the middle, see your gendered language, and feel excluded.

In addition, not everyone identifies by male or female pronouns. Play it safe, just be gender neutral.

Using the plural pronouns isn't grammatically correct!

I've been assured by people far more knowledgeable than I that it's ok; even Shakespeare did it. Personally, I'm comforted by the knowledge that even if I'm wrong about the grammar, I won't have made anyone feel excluded.

Someone sent a pull request to my project changing the language!

So merge it! If you've got some process that a contributor needs to go through (such as a CLA), let them know. They're just trying to make your community better and bigger!

They said I was being hostile!

I'm sorry, but you were. Your choice of language has an impact on people.

I wasn't trying to be!

That's ok, hostility isn't about intent, your words had an impact whether you meant it or not.

Maybe you didn't know, maybe you're not a native English speaker, or maybe your 11th grade English teacher beat you over the head with some bad advice. That's ok, it only takes a moment to fix it, and then you're letting everyone know it's easy to fix!

Aren't there bigger issues we should be dealing with?

There are so many giant issues we face. This one takes 15 seconds to fix, has no downsides, and we can all be a part of making it better. If we can't do this, how could we ever tackle the other challenges?

Has anyone ever asked these questions?

You have no idea.

Some of these aren't questions!

That's ok.

Affirmative action

Posted November 27th, 2013. Tagged with community, ethics, diversity.

Whenever the topic of affirmative action comes up, you can be sure someone will ask the question: "How would you feel if you found out that you got your job, or got into college, because of your race?"

It's funny, no one ever asks: "How would you feel if you got your job, or got into college, because you were systemically advantaged from the moment you were born?"

Interesting.

Security process for Open Source Projects

Posted October 19th, 2013. Tagged with django, python, open-source, community.

This post is intended to describe how open source projects should handle security vulnerabilities. This process is largely inspired by my involvement in the Django project, whose process is in turn largely drawn from the PostgreSQL project's. For every recommendation I make I'll try to explain why I've made it, and how it serves to protect you and your users. This is largely tailored to large, high-impact projects, but you should be able to apply it to any of your projects.

Why do you care?

Security vulnerabilities put your users, and often, in turn, their users at risk. As an author and distributor of software, you have a responsibility to your users to handle security releases in a way most likely to help them avoid being exploited.

Finding out you have a vulnerability

The first thing you need to do is make sure people can report security issues to you in a responsible way. This starts with having a page in your documentation (or on your website) which clearly describes an email address people can report security issues to. It should also include a PGP key fingerprint which reporters can use to encrypt their reports (this ensures that if the email goes to the wrong recipient, they will be unable to read it).

You also need to describe what happens when someone emails that address. It should look something like this:

  1. You will respond promptly to any reports to that address; this means within 48 hours. This response should confirm that you received the issue, and ideally state whether you've been able to verify it or whether more information is needed.
  2. Assuming you're able to reproduce the issue, now you need to figure out the fix. This is the part with a computer and programming.
  3. You should keep in regular contact with the reporter to update them on the status of the issue if it's taking time to resolve for any reason.
  4. Now you need to inform the reporter of your fix and the timeline (more on this later).

Timeline of events

From the moment you get the initial report, you're on the clock. Your goal is to have a new release issued within two weeks of getting the report email. Absolutely nothing that occurs until the final step is public. Here are the things that need to happen:

  1. Develop the fix and let the reporter know.
  2. You need to obtain a CVE (Common Vulnerabilities and Exposures) number. This is a standardized number which identifies vulnerabilities in packages. There's a section below on how this works.
  3. If you have downstream packagers (such as Linux distributions) you need to reach out to their security contact and let them know about the issue, all the major distros have contact processes for this. (Usually you want to give them a week of lead time).
  4. If you have large, high visibility, users you probably want a process for pre-notifying them. I'm not going to go into this, but you can read about how Django handles this in our documentation.
  5. You issue a release, and publicize the heck out of it.

Obtaining a CVE

In short, follow these instructions from Red Hat.

What goes in the release announcement

Your release announcement needs to have several things:

  1. A precise and complete description of the issue.
  2. The CVE number
  3. Actual releases using whatever channel is appropriate for your project (e.g. PyPI, RubyGems, CPAN, etc.)
  4. Raw patches against all supported releases (these are in addition to the release; some of your users will have modified the software, and they need to be able to apply the patches easily too).
  5. Credit to the reporter who discovered the issue.

Why complete disclosure?

I've recommended that you completely disclose what the issue was. Why is that? A lot of people's first instinct is to want to keep that information secret, to give your users time to upgrade before the bad guys figure it out and start exploiting it.

Unfortunately it doesn't work like that in the real world. In practice, not disclosing gives more power to attackers and hurts your users. Dedicated attackers will look at your release and the diff and figure out what the exploit is, but your average users won't be able to. Even embedding the fix into a larger release with many other things doesn't mask this information.

In the case of yesterday's Node.js release, which did not practice complete disclosure, and did put the fix in a larger patch, this did not prevent interested individuals from figuring out the attack; it took me about five minutes to do so, and any serious individual could have done it much faster.

The first step for users in responding to a security release in something they use is to assess exposure and impact. Exposure means "Am I affected and how?", impact means "What is the result of being affected?". Denying users a complete description of the issue strips them of the ability to answer these questions.

What happens if there's a zero-day?

A zero-day is when an exploit is publicly available before a project has any chance to respond to it. Sometimes this happens maliciously (e.g. a black hat starts using the exploit against your users) and sometimes accidentally (e.g. a user reports a security issue to your mailing list instead of the security contact). Either way, when this happens, everything goes to hell in a handbasket.

When a zero-day happens basically everything happens in 16x fast-forward. You need to immediately begin preparing a patch and issuing a release. You should be aiming to issue a release on the same day as the issue is made public.

Unfortunately there's no secret to managing zero-days. They're quite simply a race between the people who might exploit the issue and you, to issue a release and inform your users.

Conclusion

Your responsibility as a package author or maintainer is to protect your users. The name of the game is keeping your users informed and able to judge their own security, and making sure they have that information before the bad guys do.

Meritocracy

Posted October 12th, 2013. Tagged with politics, community, ethics, django, open-source.

Let's start with a definition: a meritocracy is a group where leadership or authority is derived from merit (merit being skills or ability), and particularly objective merit. I think adding the word objective is important, but it's not often explicitly stated.

A lot of people like to say open source is a meritocracy, the people who are the top of projects are there because they have the most merit. I'd like to examine this idea. What if I told you the United States Congress was a meritocracy? You might say "gee, how could that be, they're really terrible at their jobs, the government isn't even operational!?!". To which I might respond "that's evidence that they aren't good at their jobs, it doesn't prove that they aren't the best of the available candidates". You'd probably tell me that "surely someone, somewhere, is better qualified to do their jobs", and I'd say "we have an open, democratic process, if there was someone better, they'd run for office and get elected".

Did you see what I did there? It was subtle, a lot of people miss it. I begged the question. Begging the question is the act of responding to a hypothesis with a conclusion that's premised on exactly the question the hypothesis asks.

So what if you told me that open source was a meritocracy? Projects gain recognition because they're the best; people become maintainers of libraries because they're the best.

And those of us involved in open source love this explanation, why wouldn't we? This explanation says that the reason I'm a core developer of Django and PyPy is that I'm so gosh-darned awesome. And who doesn't like to think they're awesome? And if I can have a philosophy that leads to myself being awesome, all the better!

Unfortunately, it's not a valid conclusion. The problem with stating that a group is meritocratic is that it's not a falsifiable hypothesis.

We don't have a definition of objective merit. As a result, there's no piece of evidence I can show you to prove that a group isn't in fact meritocratic. And a central tenet of any sort of rigorous inquisitive process is that we need to be able to construct a formal opposing argument. I can test whether a society is democratic: do the people vote, is the result of the vote respected? I can't test whether a society is meritocratic.

It's unhealthy when we consider our groups, our cultures, or our societies as being meritocratic. It makes us ignore questions about who our leaders are, how they got there, and who isn't represented. The best we can say is that maybe our organizations are (perceptions of subjective merit)-ocracies, which is profoundly different from what we mean when we say meritocracy.

I'd like to encourage groups that self-identify as being meritocratic (such as The GNOME Foundation, The Apache Software Foundation, Mozilla, The Document Foundation, and The Django Software Foundation) to reconsider this. Aspiring to meritocracy is reasonable; it makes sense to want the people best capable of leading to lead us, but it's not something we can ever say we've achieved.

Thoughts on Lavabit

Posted October 2nd, 2013. Tagged with politics, security, ethics.

If you haven't already, you should start by reading Wired's article on this.

I am not a lawyer. That said, I want to walk through my take on each stage of this.

The government served Lavabit with an order requiring them to supply metadata about every email, as well as mailbox accesses, for a specific user. Because this was "metadata" only, the government was not required to supply probable cause.

First, it should be noted that metadata isn't a thing. There's not a definition, it has no meaning. There's simply data.

Lavabit refused to comply, whereupon the government filed a motion requiring them to comply, which a US magistrate so ordered.

And here's where things go wrong. The magistrate erred in ordering compliance. While an argument could be made (note: I'm not making this argument) that in general certain metadata does not have an expectation of privacy, Lavabit operates a specialized service. Immediately upon receipt of mail, it's encrypted with a user's public key. After that it's technically impossible for the service to read the plaintext of a user's email. This relationship creates a strong expectation of privacy, and the Fourth Amendment very explicitly requires a warrant supported by probable cause at this point.

But let's ignore this first order. Lavabit has, in the past, complied with lawful search warrants, and there's no reason to believe they would not have been able to comply with a lawfully constructed one here.

Following this, the FBI obtained a warrant requiring that Lavabit turn over their SSL private key. The application for, and issue of, this warrant unambiguously violated Lavabit's constitutional protections. The Fourth Amendment requires that a warrant specifically describe the place to be searched and the things being sought.

Access to Lavabit's private key would allow someone with the raw internet traffic (which, presumably, the FBI had access to) to decrypt and read any user's emails before they reached Lavabit's servers. Simply put, this was a warrant issued in flagrant violation of the United States Constitution.

The fact that Lavabit refused to cooperate with the government's original order in no way gave them the right to apply (or be granted) the follow up order. Failure to comply with a lawfully issued warrant can result in fines, or even jail time, but it does not grant the government extra-legal authority.

The entirety of this case, and particularly the government's second request, demonstrates a travesty of immense proportions. The assumptions I grew up with about my legal protections as an American are rapidly being shown to be illusory. Lavabit's founder is raising money to support his legal defense; I've donated and I hope you will too.

Effective Code Review

Posted September 26th, 2013. Tagged with openstack, python, community, django, open-source.

Maybe you practice code review, either as a part of your open source project or as a part of your team at work; maybe you don't yet. But if you're working on a software project with more than one person, it is, in my view, a necessary piece of a healthy workflow. The purpose of this piece is to try to convince you it's valuable, and show you how to do it effectively.

This is based on my experience doing code review both as a part of my job at several different companies, as well as in various open source projects.

What

It only seems fair that before I try to convince you to make code review an integral part of your workflow, I precisely define what it is.

Code review is the process of having another human being read over a diff. It's exactly like what you might do to review someone's blog post or essay, except it's applied to code. It's important to note that code review is about code. Code review doesn't mean an architecture review, a system design review, or anything like that.

Why

Why should you do code review? It's got a few benefits:

  • It raises the bus factor. By forcing someone else to have the familiarity to review a piece of code you guarantee that at least two people understand it.
  • It ensures readability. By getting someone else to provide feedback based on reading, rather than writing, the code you verify that the code is readable, and give an opportunity for someone with fresh eyes to suggest improvements.
  • It catches bugs. By getting more eyes on a piece of code, you increase the chances that someone will notice a bug before it manifests itself in production. This is in keeping with Eric Raymond's maxim that, "given enough eyeballs, all bugs are shallow".
  • It encourages a healthy engineering culture. Feedback is important for engineers to grow in their jobs. By having a culture of "everyone's code gets reviewed" you promote a culture of positive, constructive feedback. In teams without review processes, or where reviews are infrequent, code review tends to be a tool for criticism, rather than learning and growth.

How

So now that I've, hopefully, convinced you to make code review a part of your workflow, how do you put it into practice?

First, a few ground rules:

  • Don't use humans to check for things a machine can. This means that code review isn't a process of running your tests, or looking for style guide violations. Get a CI server to check for those, and have it run automatically. This is for two reasons: first, if a human has to do it, they'll do it wrong (this is true of everything); second, people respond better to certain types of reviews when they come from a machine. If I leave the review "this line is longer than our style guide suggests" I'm nitpicking and being a pain in the ass; if a computer leaves that review, it's just doing its job.
  • Everybody gets code reviewed. Code review isn't something senior engineers do to junior engineers; it's something everyone participates in. Code review can be a great equalizer: senior engineers shouldn't have special privileges, and their code certainly isn't above the review of others.
  • Do pre-commit code review. Some teams do post-commit code review, where a change is reviewed after it's already pushed to master. This is a bad idea. Reviewing a commit after it's already been landed promotes a feeling of inevitability or fait accompli, reviewers tend to focus less on small details (even when they're important!) because they don't want to be seen as causing problems after a change is landed.
  • All patches get code reviewed. Code review applies to all changes, for the same reasons you run your tests for all changes. People are really bad at guessing the implications of "small patches" (there's a near 100% rate of me breaking the build on changes that are "so small, I don't need to run the tests"). It also encourages you to have a system that makes code review easy; you're going to be using it a lot! Finally, having a strict "everything gets code reviewed" policy helps you avoid arguments about just how small a small patch is.

So how do you start? First, get yourself a system. Phabricator, Github's pull requests, and Gerrit are the three systems I've used; any of them will work fine. The major benefit of having a tool (over just mailing patches around) is that it'll keep track of the history of reviews, and will let you easily comment on a line-by-line basis.

You can either have patch authors land their changes once they're approved, or you can have the reviewer merge a change once it's approved. Either system works fine.

As a patch author

Patch authors only have a few responsibilities (besides writing the patch itself!).

First, they need to express what the patch does, and why, clearly.

Second, they need to keep their changes small. Studies have shown that beyond 200-400 lines of diff, patch review efficacy trails off [1]. You want to keep your patches small so they can be effectively reviewed.

It's also important to remember that code review is a collaborative feedback process. If you disagree with a review note, you should start a conversation about it; don't just ignore it, or implement it even though you disagree.

As a reviewer

As a patch reviewer, you're going to be looking for a few things; I recommend reviewing for these attributes in this order:

  • Intent - What change is the patch author trying to make, is the bug they're fixing really a bug? Is the feature they're adding one we want?
  • Architecture - Are they making the change in the right place? Did they change the HTML when really the CSS was busted?
  • Implementation - Does the patch do what it says? Is it possibly introducing new bugs? Does it have documentation and tests? This is the nitty-gritty of code review.
  • Grammar - The little things. Does this variable need a better name? Should that be a keyword argument?

You're going to want to start at intent and work your way down. The reason for this is that if you start giving feedback on variable names, and other small details (which are the easiest to notice), you're going to be less likely to notice that the entire patch is in the wrong place! Or that you didn't want the patch in the first place!

Doing reviews on concepts and architecture is harder than reviewing individual lines of code, that's why it's important to force yourself to start there.

There are three different types of review elements:

  • TODOs: These are things which must be addressed before the patch can be landed; for example a bug in the code, or a regression.
  • Questions: These are things which must be addressed, but don't necessarily require any changes; for example, "Doesn't this class already exist in the stdlib?"
  • Suggestions for follow up: Sometimes you'll want to suggest a change, but it's big, or not strictly related to the current patch, and can be done separately. You should still mention these as a part of a review in case the author wants to adjust anything as a result.

It's important to note which type of feedback each comment you leave is (if it's not already obvious).

Conclusion

Code review is an important part of a healthy engineering culture and workflow. Hopefully, this post has given you an idea of either how to implement it for your team, or how to improve your existing workflow.

[1]http://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/

Being negative

Posted September 22nd, 2013. Tagged with thinking, community.

From time to time I joke that Bob Knight stole the title of my autobiography with his, which is titled "The Power of Negative Thinking". I've never read the book, but it's very easy for me to imagine how it could apply to me. Many people who know me would immediately identify me as a negative person. They're not wrong, and it's a constant source of struggle for me.

To be clear: I'm sarcastic, I'm critical, I'm a perfectionist and impossible to impress, and I have a capacious ego. As a result, I almost universally have a problem with any technology I come across; I have a critique to offer of nearly everything, both social and technical.

Some of this is probably my "personality" [1], but a lot of it is intentional. I'm deliberately negative about many things. There are a few reasons for this. First, I'm good at it; I seem to have an ability to identify and articulate problems with things. I also think it's important: when things are not perfect (and they so rarely are), we have a responsibility to speak honestly about them, and to discuss their flaws with the same prominence we discuss their features. Finally, articulating problems with things is one of the ways I learn best. Much of my philosophy about software, and the world, has been formed by identifying problems with the things that exist today.

The conflict about this negativity for me comes from two places. First, the effect it has on other people. For many people, seeing this negativity has a demoralizing effect; they lose interest in something as a result. In particular I'm concerned that my attitudes could be discouraging to people getting into software development; James Coglan wrote a thing about this, and I certainly don't want to be part of the problem, particularly given how much I've invested in trying to make the tech community more, not less, welcoming. The second conflict comes from the fact that I am, at heart, a boundlessly optimistic person. A strong complement to my negativity is an unyielding belief that we must, and can, fix all of these things.

Where does this leave me? Uncertain. It is truly important to me that I continue to cast a critical eye on everything, including playing the devil's advocate; it's part of how I learn, and learning is very much something I want to continue to do. But I don't want to ever be the reason someone is afraid to get involved in programming, in open source, in speaking, or in anything else, because they're afraid I'll do nothing but critique their work. I don't know how to resolve this tension. For the past few months I've been trying to be less negative and angry on Twitter; I don't know how successful I've been. I hope you'll try to help by letting me know when I've gone over the line.

[1]This isn't to say it's intrinsic, or immutable, but simply that it's not a conscious thing.

Doing a release is too hard

Posted September 17th, 2013. Tagged with openstack, django, python, open-source.

I just shipped a new release of alchimia. Here are the steps I went through:

  • Manually edit version numbers in setup.py and docs/conf.py. In theory I could probably centralize this (see the sketch after this list), but then I'd still have a place I need to update manually.
  • Issue a git tag (actually I forgot to do that on this project, oops).
  • python setup.py register sdist upload -s to build and upload some tarballs to PyPI
  • python setup.py register bdist_wheel upload -s to build and upload some wheels to PyPI
  • Bump the version again for the now pre-release status (I never remember to do this)
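
For that first step, here's one common way to centralize the version number (a sketch; the file layout and version string are hypothetical):

# alchimia/__init__.py
__version__ = "0.4"

# setup.py (excerpt): read the version from the package instead of duplicating it
import re

with open("alchimia/__init__.py") as f:
    version = re.search(r'__version__ = "([^"]+)"', f.read()).group(1)

# docs/conf.py can parse the same attribute, leaving exactly one
# place to edit by hand.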

Here's how it works for OpenStack projects:

  • git tag VERSION -s (-s makes it a GPG-signed tag)
  • git push gerrit VERSION (this sends the tag to Gerrit for review)

Once the tag is approved in the code review system, a release will automatically be issued, including:

  • Uploading to PyPI
  • Uploading documentation
  • Landing the tag in the official repository

Version numbers are always automatically handled correctly.

This is how it should be. We need to bring this level of automation to all projects.

You guys know who Philo Farnsworth was?

Posted September 15th, 2013. Tagged with django, python, open-source, community.

Friends of mine will know I'm a very big fan of the TV show Sports Night (really any of Aaron Sorkin's writing, but Sports Night in particular). Before you read anything I have to say, take a couple of minutes and watch this clip:

I doubt Sorkin knew it when he scripted this (I doubt he knows it now either), but this piece is about how Open Source happens (to be honest, I doubt he knows what Open Source Software is).

This short clip actually makes two profound observations about open source.

First, most contributions are not big things. They're not adding huge new features, they're not rearchitecting the whole system to address some limitation, they're not even fixing a super annoying bug that affects every single user. Nope, most of them are adding a missing sentence to the docs, fixing a bug in a wacky edge case, or adding a tiny hook so the software is a bit more flexible. And this is fantastic.

The common wisdom says that the thing open source is really bad at is polish. My experience has been the opposite, no one is better at finding increasingly edge case bugs than open source users. And no one is better at fixing edge case bugs than open source contributors (who overlap very nicely with open source users).

The second lesson in that clip is about how to be an effective contributor. Specifically that one of the keys to getting involved effectively is for other people to recognize that you know how to do things (this is an empirical observation, not a claim of how things ought to be). How can you do that?

  • Write good bug reports. Don't just say "it doesn't work"; if you've been a programmer for any length of time, you know this isn't a useful bug report. What doesn't work? Show us the traceback, or otherwise unexpected behavior; include a test case or instructions for reproduction.
  • Don't skimp on the details. When you're writing a patch, make sure you include docs, tests, and follow the style guide, don't just throw up the laziest work possible. Attention to detail (or lack thereof) communicates very clearly to someone reviewing your work.
  • Start a dialogue. Before you send that 2,000 line patch with that big new feature, check in on the mailing list. Make sure you're working in a way that's compatible with where the project is headed, give people a chance to give you some feedback on the new APIs you're introducing.

This all works in reverse too, projects need to treat contributors with respect, and show them that the project is worth their time:

  • Follow community standards. In Python this means things like PEP8, having a working setup.py, and using Sphinx for documentation.
  • Have passing tests. Nothing throws me for a loop worse than when I check out a project to contribute to and the tests don't pass.
  • Automate things. Things like running your tests, linters, even state changes in the ticket tracker should all be automated. The alternative is making human beings manually do a bunch of "machine work", which will often be forgotten, leading to a sub-par experience for everyone.

Remember, Soylent Green Open Source is people

That's it, the blog post's over.
