Friday, March 28, 2014

Books: Two Scoops of Django: Best Practices For Django 1.6

I just finished reading the book Two Scoops of Django: Best Practices For Django 1.6. I had already reviewed the previous edition, so I was anxious to see what had changed. In short, I loved it!

It's not an introduction, tutorial, or a reference for Django. In fact, it assumes you've already gone through the Django tutorial, and it occasionally refers you to the Django documentation. Rather, it tells you what you should and shouldn't do to use Django effectively. It's very prescriptive, and it has strong opinions. I've always enjoyed books like that. Best of all, it's only about 400 pages long, and it's very easy to read.

This edition is 100 pages longer than the previous edition, and I really enjoyed the new chapters. It has even more silly drawings and creamy ice cream analogies than the original, and even though I'm lactose intolerant, that made the book a lot of fun to read.

Having read the book cover-to-cover, even though I'm fairly new to Django, I kind of feel like I know what I'm doing at this point--at least a little ;) I was afraid when I started using Django that I wouldn't have that feeling until I had used it for a couple years.

So, if you're using Django, I highly recommend it!

Thursday, February 27, 2014

Python: A Response to Glyph's Blog Post on Concurrency

If you haven't seen it yet, Glyph wrote a great blog post on concurrency called Unyielding. This post is my response to it.

Over the years, I've tried each of the approaches he talks about while working at various companies, and I've known all of the arguments for years. Still, I think his post is the best presentation of those arguments that I've read. Nice job, Glyph!

In particular, I agree with his statements:

What I hope I’ve demonstrated is that if you agree with me that threading has problematic semantics, and is difficult to reason about, then there’s no particular advantage to using microthreads, beyond potentially optimizing your multithreaded code for a very specific I/O bound workload.
There are no shortcuts to making single-tasking code concurrent. It's just a hard problem, and some of that hard problem is reflected in the difficulty of typing a bunch of new concurrency-specific code.

In this blog post, I'm not really disputing his core message. Rather, I'm just pointing out some details and distinctions.

First of all, it threw me off when he mentioned JavaScript, since JavaScript doesn't have threads. In the browser, it has web workers, which are like processes, and in Node, it has a mix of callbacks, deferreds, and yield. However, reading his post a second time, I realized that all he said was that JavaScript had "global shared mutable state". He never said that it had threads.

The next thing I'd like to point out is that there are some real readability differences between the different approaches. Glyph did a good job of arguing that it's difficult to reason about concurrency when you use threads. However, if you ignore race conditions for a moment, I think it's certainly true that threads, explicit coroutines, and green threads are easier to read than callbacks and deferreds. That's because they let you write code in a more traditional, linear fashion. Even though I can do it, using callbacks and deferreds always causes my brain to hurt ;) Perhaps I just need more practice.

Another thing to note is that the type of application matters a lot when you need to address concurrency concerns. If you're building a UI, for instance, you don't want any computationally heavy work to be done on the UI thread. In Android, you do as little CPU-heavy and IO-heavy work as possible on the UI thread and instead push that work off onto other threads.

Other things to consider are whether the workload is IO bound or CPU bound, and whether the application is stateful or stateless.

Threads are fine if all of the following are true:

  • You're building a stateless web app.
  • You're IO bound.
  • All mutable data is stored in a per-request context object, in per-request instances, or in thread-local storage.
  • You have no module-level or class-level mutable data.
  • You're not doing things like creating new classes or modules on the fly.
  • In general, threads don't interact with each other.
  • You keep your application state in a database.
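
To make the thread-local storage item above concrete, here's a minimal sketch (the names are hypothetical) of keeping per-request mutable state out of module-level data:

import threading

# Each thread that touches this object sees its own attributes, so
# concurrent requests can't trample each other's state.
request_context = threading.local()

def handle_request(user_id):
  # Per-request mutable state lives on the thread-local context rather
  # than in a module-level dict shared across threads.
  request_context.user_id = user_id
  request_context.messages = []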

Sure, there's always going to be some global, shared, mutable data such as sys.modules, but in practice, Python itself protects that using the GIL.

I've built apps like the above in a multithreaded way for years, and I've never run into any race conditions. The difference between this sort of app and the app that led to Glyph's "buggiest bug" is that he was writing a very stateful application server.

I'd also like to point out that it's important to not overlook the utility of UNIX processes. Everyone knows how useful the multiprocessing module is and that processes are the best approach in Python for dealing with CPU bound workloads (because you don't have to worry about the GIL).
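
For example, here's a minimal sketch of pushing CPU bound work onto a pool of processes (expensive_function is just a stand-in):

import multiprocessing

def expensive_function(n):
  # Pretend this is CPU bound work. Each call runs in a separate
  # process, so the GIL isn't a bottleneck.
  return sum(i * i for i in range(n))

if __name__ == "__main__":
  pool = multiprocessing.Pool()  # One worker process per core by default.
  results = pool.map(expensive_function, [1000000] * 8)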

However, using a pre-fork model is also a great way of building stateless web applications. If you have to handle a lot of requests, but you don't have to handle a very large number simultaneously, pre-forked processes are fine. The upside is that the code is both easy to read (because it doesn't use callbacks or deferreds), and it's easy to reason about (because you don't have the race conditions that threads have). Hence, a pre-fork model is great for programmer productivity. The downside is that each process can eat up a lot of memory. Of course, if your company makes it to the point where hardware efficiency costs start outweighing programmer efficiency costs, you have what I like to call a "nice to have problem". PHP and Ruby on Rails have both traditionally used a pre-fork approach.
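
To make the pre-fork idea concrete, here's a minimal sketch (definitely not production code) of a parent process on a UNIX system that forks a fixed pool of workers, each of which accepts connections on a shared listening socket:

import os
import socket

def serve_forever(listener):
  while True:
    conn, addr = listener.accept()
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nHello!\r\n")
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 8080))
listener.listen(64)

# Fork the workers up front (hence "pre-fork"). Each child inherits the
# listening socket, and the kernel hands each new connection to exactly
# one of them.
for _ in range(4):
  if os.fork() == 0:
    serve_forever(listener)

# The parent just waits on its children.
while True:
  os.wait()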

I'm also a huge fan of approaches such as Erlang that give you what is conceptually a process, without the overhead of real UNIX processes.

As Glyph hinted at, this is a really polarizing issue, and there really are no easy, perfect-in-every-way solutions. Any time concurrency is involved, there are always going to be some things you need to worry about regardless of which approach to concurrency you take. That's why we have things like databases, transactions, etc. It's really easy to fall into religious arguments about the best approach to concurrency at an application level. I think it's really helpful to be frank and honest about the pros and cons of each approach.

That being said, I do look forward to one day trying out Guido's asyncio module.


Thursday, January 30, 2014

Python: A lightning quick introduction to virtualenv, nose, mock, monkey patching, dependency injection, and doctest

virtualenv

virtualenv is a tool for installing Python packages locally (i.e. local to a particular project) instead of globally. Here's how to get everything setup:

# Make sure you're using the version of Python you want to use.
which python

sudo easy_install -U setuptools
sudo easy_install pip
sudo pip install virtualenv

Now, let's setup a new project:

mkdir ~/Desktop/sfpythontesting
cd ~/Desktop/sfpythontesting
virtualenv env

# Do this anytime you want to work on the application.
. env/bin/activate

# Make sure that pip is running from within the env.
which pip

pip install nose
pip install mock
pip freeze > requirements.txt

# Now that you've created a requirements.txt, other people can just run:
# pip install -r requirements.txt
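
The resulting requirements.txt is just a list of pinned versions, something like this (your exact version numbers will differ):

mock==1.0.1
nose==1.3.0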

nose

Nose is a popular Python testing library. It's simple and powerful.

Create a file, ~/Desktop/sfpythontesting/sfpythontesting/main.py, with the following (you'll also need an empty __init__.py in that same directory so that sfpythontesting is importable as a package):

import random

def sum(a, b):
  return a + b

Now, create another file, ~/Desktop/sfpythontesting/tests/test_main.py, with the following:

from nose.tools import assert_equal, assert_raises
import mock

from sfpythontesting import main

def test_sum():
  assert_equal(main.sum(1, 2), 3)

To run the tests:

nosetests --with-doctest

Testing a function that raises an exception

Add the following to main.py:

def raise_an_exception():
  raise ValueError("This is a ValueError")

And the following to test_main.py:

def test_raise_an_exception():
  with assert_raises(ValueError) as context:
    main.raise_an_exception()
  assert_equal(str(context.exception), "This is a ValueError")

Your tests should still be passing.

Monkeypatching

Sometimes there are parts of your code that are difficult to test because they involve randomness, they are time dependent, or they involve external things such as third-party web services. One approach to solving this problem is to use a mocking library to mock out those sorts of things.

Add the following to main.py:

def make_a_move_with_mock_patch():
  """Figure out what move to make in a hypothetical game.

  Use random.randint in part of the decision making process.

  In order to test this function, you have to use mock.patch to monkeypatch random.randint.

  """
  if random.randint(0, 1) == 0:
    return "Attack!"
  else:
    return "Defend!"

Now, add the following to test_main.py. This code dynamically replaces random.randint with a mock (that is, a fake version), thereby allowing you to make it return the same value every time.

@mock.patch("sfpythontesting.main.random.randint")
def test_make_a_move_with_mock_patch_can_attack(randint_mock):
  randint_mock.return_value = 0
  assert_equal(main.make_a_move_with_mock_patch(), "Attack!")

@mock.patch("sfpythontesting.main.random.randint")
def test_make_a_move_with_mock_patch_can_defend(randint_mock):
  randint_mock.return_value = 1
  assert_equal(main.make_a_move_with_mock_patch(), "Defend!")

Your tests should still be passing.

Here's a link to a more detailed article on the mock library.

Dependency injection

Another approach to this same problem is to use dependency injection. Add the following to main.py:

def make_a_move_with_dependency_injection(randint=random.randint):
  """This is another version of make_a_move.

  Accept the randint *function* as a parameter so that the test code can inject a different
  version of the randint function.

  This is known as dependency injection.

  """
  if randint(0, 1) == 0:
    return "Attack!"
  else:
    return "Defend!"

And add the following to test_main.py. Instead of letting make_a_move_with_dependency_injection use the normal version of randint, we pass in our own special version:

def test_make_a_move_with_dependency_injection_can_attack():
  def randint(a, b): return 0
  assert_equal(main.make_a_move_with_dependency_injection(randint=randint), "Attack!")

def test_make_a_move_with_dependency_injection_can_defend():
  def randint(a, b): return 1
  assert_equal(main.make_a_move_with_dependency_injection(randint=randint), "Defend!")

To learn more about dependency injection in Python, see this talk by Alex Martelli.

Since monkeypatching and dependency injection can solve similar problems, you might be wondering which one to use. This turns out to be sort of a religious argument akin to asking whether you should use Vi or Emacs. Personally, I recommend using a combination of PyCharm and Sublime Text ;)

My take is to use dependency injection when you can, but fall back to monkeypatching when using dependency injection becomes impractical. I also recommend that you not get bent out of shape if someone disagrees with you on this subject ;)

doctest

One benefit of using nose is that it can automatically support a wide range of testing APIs. For instance, it works with the unittest testing API as well as its own testing API. It also supports doctests which are tests embedded inside of the docstrings of normal Python code. Add the following to main.py:

def hello_doctest(name):
  """This is a Hello World function for using Doctest.

  >>> hello_doctest("JJ")
  'Hello, JJ!'

  """
  return "Hello, %s!" % name

Notice that the docstring serves as both a useful example and an executable test. Doctests have fallen out of favor in the last few years because, if you overuse them, they can make your docstrings really ugly. However, if you use them to make sure your usage examples keep working, they can be very helpful.
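
By the way, nose isn't required in order to run doctests. The standard library's doctest module can run the doctests in a single module directly:

python -m doctest -v sfpythontesting/main.py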

Conclusion

Ok, there's my lightning quick introduction to virtualenv, nose, mock, monkey patching, dependency injection, and doctest. Obviously I've only just scratched the surface. However, hopefully I've given you enough to get started!

As I mentioned above, people tend to have really strong opinions about the best approaches to testing, so I recommend being pragmatic with your own tests and tolerant of other people's strong opinions on testing. Furthermore, testing is a skill, kind of like coding Python is a skill. To get really good at it, you're going to need to learn a lot more (perhaps by reading a book) and practice. It'll get easier with time.

If you enjoyed this blog post, you might also enjoy my other short blog post, The Zen of Testing. Also, here's a link to the code I used above.

Tuesday, January 14, 2014

Interesting Computer Failures from the Annals of History

If you're young enough to not know what "Halt and Catch Fire", "Killer poke", and "lp0 on fire" are, here's a fun peek at some of the more interesting computer failures, failure modes, and failure messages from the annals of computer history:

Thanks go to Chris Dudte for first introducing me to "Halt and Catch Fire" ;)

Thursday, January 09, 2014

Humor: More Knuth Jokes

When Knuth implements tail call optimization, it's actually faster than iteration.

All of Knuth's loops terminate...with extreme prejudice.

The NSA is permanently parked outside of Knuth's house, hoping that he might help them crack public key encryption. Sometime last year, Knuth gave them a copy of "The Art of Computer Programming", but refused to tell them which page the algorithm was on.

Knuth taught a group of kids how to use their fingers as abacuses. It turns out that his method is Turing Complete.

Python: My Notes from Yesterday's SF Python Meetup

Here are my notes from yesterday's SF Python Meetup:

Embed Curl

John Sheehan.

embedcurl.com

It creates a pretty version of a curl command and the output. You can embed it in your site.

shlex is a module in Python to do simple lexical analysis.

1-Click Deployment with Launch and Docker

Nate Aune @natea

There are 10 million repos on GitHub. The curve is exponential.

appsembler.com

It launches a Docker container.

It makes it easy to deploy certain types of apps.

You can embed a widget on your app that says "Launch demo site".

docker.io (see also docker.io/learn_more/)

He talked about Containers vs. VMs.

Containers share the OS, so they launch very quickly.

You can create new containers, and each container is just a diff of another container, so it uses very little space.

Yelp: Building a Python Service Stack

Julian Krause, John Billings

They're moving toward a Service Oriented Architecture.

There are over 100 engineers at Yelp.

They have about 180k lines of code in a Python webapp called yelp-main.

This has increased the amount of time to come up to speed and release new features.

They're splitting their large codebase into a lot of little Python codebases that speak HTTP/REST.

Example: metrics = json.loads(urllib2.urlopen(url).read())

They were using Tornado 1.

"Global dependencies considered harmful."

They couldn't upgrade to Tornado 2 because there were too many dependencies on Tornado 1.

They're using virtualenv now.

He thinks that virtualenv's bin/activate is doing it wrong. It should work slightly differently.

I mentioned that one of the problems he was trying to solve could be solved by PEX.

Future directions for isolation: Docker.

They're using pip.

wheel is a built-package format for Python.

pip install -r requirements.txt

Always pin specific versions in your requirements.txt using == (e.g., nose==1.3.0).

Originally, they were using git submodules. They're not great.

They have separate repos for everything, and they release libraries for everything. They have a tool that monitors git tags.

They use Jenkins.

They use pypiserver.

They're switching from Tornado to Pyramid. It's been a successful migration.

There were issues in Tornado including testing.

Application servers: gunicorn, mod_wsgi, Circus, Pylons/waitress, and uWSGI

They evaluated all of them and picked uWSGI. It's working well. It's stable. It's fast. A lot of it is written in C. It has good documentation. They can integrate the logging with Scribe. The community is good. They have proper rolling restarts for their Java apps. uWSGI has hot reloading.

Metrics, metrics, metrics!

What is the 99th percentile time for this endpoint?

Are all service instances slow, or is it just one?

How many QPS is this endpoint handling?

Which downstream service is killing our performance?

Are any clients still using the old API?

Did the new service version introduce a performance regression?

They use a Metrics package for their Java code.

They wrote a package for Python called uwsgi_metrics. It's not open source yet, but they'll open source it shortly.

Example: with uwsgi_metrics.timing('foo'): ...

They have a JSON endpoint on all their services that exposes metrics.

uWSGI uses a pre-fork worker model.

uWSGI has mule processes. They're processes that don't handle interactive traffic, but you can use them as worker processes.

They measured a 50us overhead for recording a metric. You don't want to do this for too many metrics. 10s of metrics is okay. 1000s of metrics isn't.

airbnb/nerve is a service registration daemon that performs health checks.

airbnb/synapse is a transparent service discovery framework for connecting an SOA.

He showed service registration using Nerve. It sends stuff to ZooKeeper.

ZooKeeper is a highly available key value store with nice consistency guarantees.

They use HAProxy.

ZooKeeper -> Synapse -> HAProxy

Client -> HAProxy -> Service hosts

They have an operations dashboard.

There's no static configuration. If a service is running, it appears in the dashboard.

They're not using SmartStack in production yet.

They have a service called Service Docs. When you build a service, all the docs get put on this website, keyed by service name.

People almost always end up writing client libraries for services. If you don't write one up front, you'll end up writing one implicitly anyway.

The nerve thing should be running on the same machines as the services.

They use memcache within services. They don't yet put caches in front of services.

They're thinking about putting Nginx in front of their services to add a little HTTP caching.

They're still investigating security between services.

WTF is PEX

Brian Wickman.

This is the first time Brian has really formally announced PEX.

This is a shortened version of an internal talk he gave, so I'm not going to take notes for everything he said.

You can create a __main__.py, and then run "python .", and it'll work.
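
For example, a minimal __main__.py (the message is arbitrary) is enough to make its containing directory runnable:

# __main__.py
print("Hello from an executable directory!")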

pip search twitter.common

pip install twitter.common.python

This gives you the pex command.

A .pex is a ZIP file containing Python code that's executable. It's used to "compile" your Python projects down to a single file.

You can also use it to create a file that acts like a Python interpreter with all the requirements bundled ahead of time into it.

pex -r flask -p flask.pex

./flask.pex hello_world.py

You can use PEX to easily create self-contained Python applications.

Twitter uses pants. It's a build tool.

Aurora is a service scheduler built on top of Mesos.

Download Aurora to see an example of something that uses pants.

Pants builds modularity into your monorepo.

Pants is like Blaze at Google.

Pants is multi-interpreter and multi-platform. The pex files work on multiple platforms.

Aurora is half Java and half Python.

A .pex file is similar to a Java .war file.

Thursday, January 02, 2014

Shooting a Screencast that Shows an IDE

Many moons ago, I had to record a screencast. Most of the screencast was spent looking at code in an IDE. I wanted the IDE to be fullscreen, but I also wanted the text to be readable, even when the viewer wasn't watching fullscreen. Furthermore, I didn't want to spend all day zooming in on the cursor; I wanted things to "just work". After playing around with settings way too much, this is what worked for me:

  • I used Camtasia for Mac.
  • I plugged my laptop into my TV (using an HDMI cable) and configured the screen to be 720p. That's the only easy way I know of to get the screen to be exactly 720p.
  • I recorded the video at 720p. Hence, what was in the video matched 1 to 1 with what was on my screen.

Here's a link to the original video. The video is quite viewable in a normal YouTube window, but if you go fullscreen, it looks even better. The text is very crisp once YouTube switches over to the 720p version, but it's still readable even in lower bandwidth environments.

Friday, December 20, 2013

Making a Piña Colada in Haskell: It's All About Concurrency

I was reading The Pragmatic Programmer this morning, and it got me to thinking about Haskell.

Consider the following "function" for creating a piña colada:

  • Open blender
  • Open piña colada mix
  • Put mix in blender
  • Measure 1/2 cup white rum
  • Pour in rum
  • Add 2 cups of ice
  • Close blender
  • Liquefy for 2 minutes
  • Open blender
  • Get glasses
  • Get pink umbrellas
  • Serve

It's very easy to understand and very linear.

Now consider the following diagram that conveys which parts can be done concurrently:

This description of the recipe is quite a bit more complex, but it's a lot more obvious which things can be done concurrently.

There are a lot of approaches to concurrency. For years, we've relied on our CPUs to give us some implicit concurrency. The CPU can look at the code at a very micro level and figure out which assembly instructions can be done concurrently because they're working with different parts of memory, etc.

Threads and processes also provide concurrency, but they're at a very different level, and it's very far from implicit.

Node.js also provides concurrency. However, telling Node.js which things can be done concurrently while responding to a request still takes a lot of careful thinking. You don't have to do anything to get Node.js to handle multiple requests at the same time. However, if you need to make three REST calls in order to respond to a particular request, it's up to you to notice whether or not you can do those calls concurrently, and if you do decide you can do them concurrently, it still takes some explicit coding to make it happen.

One of the essential problems is that it takes work to get from that nice linear description of how to make a piña colada to one in which all the opportunities for concurrency are explicitly stated. That's what got me thinking about Haskell again. Maybe laziness isn't such a bad idea after all ;) In Haskell, it's a lot easier to separate describing the steps necessary to do something from actually taking those steps. You can describe the steps in a way that makes sense to you, but let Haskell decide at runtime what order to take those steps in. Admittedly, I'm hand waving a lot, and I haven't actually read Parallel and Concurrent Programming in Haskell, but I just have this feeling that Haskell makes you think in a way where the concurrent version of the recipe above will just kind of happen naturally.

Here's a fun exercise for the reader. Write a program that implements each of the steps above by making a REST call. For instance, to "Open blender", just do a POST to some server with the string "Open blender". The server doesn't really have to do anything (we'll just pretend). Implement the entire recipe using the opportunities for concurrency shown in the diagram above. Can your system automatically figure out which things can be done concurrently and then automatically do them concurrently? Bonus points if you can make multiple piña coladas at the same time. Double bonus points if you can write code that is somewhat readable by someone unfamiliar with your system.

Footnotes:

There are a lot of things related to what I'm talking about such as Flow-based programming, the Actor model, etc. On the other hand, perhaps there are no silver bullets.

I'm not a real Haskell programmer or a Node.js programmer.

In fact, I've never even had a piña colada, so I could be way off base here ;)