Wednesday, April 23, 2014

PyCon Notes: PostgreSQL Proficiency for Python People

In summary, this tutorial was fantastic! I learned more in three hours than I would have learned if I had read a whole book!

Here's the video. Here are the slides. Here are my notes:

Christoph Pettus was the speaker. He's from PostgreSQL Experts.

PostgreSQL is a rich environment.

It's fully ACID compliant.

It has the richest feature set of any modern, production RDBMS. It has even more features than Oracle.

PostgreSQL focuses on quality, security, and spec compliance.

It's capable of very high performance: tens of thousands of transactions per second, petabyte-sized data sets, etc.

To install it, just use your package management system (apt, yum, etc.). Those systems will usually take care of initialization.

There are many options for OS X. Heroku even built a Postgres.app that runs more like a foreground app.

A "cluster" is a single PostgreSQL server (which can manage multiple databases).

initdb creates the basic file structure. You have to run it before PostgreSQL can start.

To create a database:

sudo su - postgres
psql

create database this_new_database;

To drop a database:

drop database this_new_database;

Debian runs initdb for you. Red Hat does not.

Debian has a cluster management system. Use it. See, for instance, pg_createcluster.

Always create databases as UTF-8. Once a database has been created, you can't change its encoding.

Don't use SQL_ASCII. It's a nightmare. Don't use the "C" locale.

pg_ctl is a built-in command to start and stop PostgreSQL:

cd POSTGRES_DIRECTORY
pg_ctl -D . start

Usually, pg_ctl is wrapped by something provided by your platform.

On Ubuntu, start PostgreSQL via:

service postgresql start

Always use "-m fast" when stopping.

Postgres puts its own data in a top-level directory. Let's call it $PGDATA.

Don't monkey around with that data.

pg_clog and pg_xlog are important. Don't mess with them.

On most systems, configuration lives in $PGDATA.

postgresql.conf contains server configuration.

pg_hba.conf contains authentication settings.

postgresql.conf can feel very overwhelming.

Avoid making a lot of changes to postgresql.conf. Instead, add the following to it:

include "postgresql.conf.include"

Then, mess with "postgresql.conf.include".

The important parameters fall into these categories: logging, memory, checkpoints, and the planner.

Logging:

Be generous with logging. It has a very low impact on the system. It's your best source of info for diagnosing problems.

You can log to syslog or log CSV to files. He showed his typical logging configuration.

He showed his guidelines / heuristics for all the settings, including how to fine-tune things. They're really good! See his slides.

As of version 9.3, you don't need to tweak Linux kernel parameters anymore.

Do not mess with fsync or synchronous_commit.

Most settings require a server reload to take effect. Some things require a server restart. Some can be set on a per-session basis. Here's how to do that. This is also an example of how to use a transaction:

begin;
set local random_page_cost = 2.5;
show random_page_cost;
abort;

PostgreSQL has users and roles. Roles are like groups. They form a hierarchy.

A user is just a role with login privs.

Don't use the "postgres" superuser for anything application-related.

Sadly, you probably will have to grant schema-modification privs to your app user if you use migrations, but if you don't have to, don't.

By default, DB traffic is not encrypted. Turn on SSL if you are running in a cloud provider.

In pg_hba.conf, "trust" means anyone who can connect to the server is let in without a password. "peer" means the client is let in if their OS username matches the Postgres username. "md5" means an md5-hashed password is required.

It's a good idea to restrict the IP addresses allowed to talk to the server fairly tightly.

The WAL

The Write-Ahead Log is key to many Postgres operations. It's the basis for replication, crash recovery, etc.

When each transaction is committed, it is logged to the write-ahead log.

Changes in the transaction are flushed to disk.

If the system crashes, the WAL is "replayed" to bring the DB to a consistent state.

It's a continuous record of changes since the last checkpoint.

The WAL is stored in 16MB segments in the pg_xlog directory.

Never delete anything from pg_xlog.

archive_command is a way to move the WAL segments to someplace safe (like a different system).

By default, synchronous_commit is on, which means that commits do not return until the WAL flush is done. If you turn it off, they'll return when the WAL flush is queued. You might lose transactions in the case of a crash, but there's no risk of database corruption.
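
For example, here's a minimal psycopg2 sketch of turning synchronous_commit off for a single session; the DSN and the events table are assumptions, not something from the talk:

import psycopg2

conn = psycopg2.connect("dbname=example")  # hypothetical DSN

with conn, conn.cursor() as cur:
  # Session-level setting: commits return before the WAL flush finishes.
  # A crash can lose the last few transactions, but it can't corrupt the DB.
  cur.execute("SET synchronous_commit TO off")
  cur.execute("INSERT INTO events (payload) VALUES (%s)", ("bulk row",))  # hypothetical table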

Backup and Recovery

Experience has shown that 20% of the time, your EBS volumes will not reattach when you reboot in AWS.

pg_dump is a built-in dump/restore tool.

It takes a logical snapshot of the database.

It doesn't lock the database or prevent writes to disk.

pg_restore restores the database. It's not fast.

It's great for simple backups but not suitable for fast recovery from major failures.

pgbench is the built-in benchmarking tool.

pg_dump -Fc --verbose example > example.dump

Without the -Fc, it dumps SQL commands instead of its custom format.

pg_restore --dbname=example_restored --verbose example.dump

pg_restore takes a long time because it has to recreate indexes.

Use pg_dumpall --globals-only to back up cluster-wide objects such as roles.

Back up each database with pg_dump using --format=custom.

To do a parallel restore, use --jobs=.

If you have a large database, pg_dump may not be appropriate.

A disk snapshot + every WAL segment is enough to recreate the database.

To start a PITR (point in time recovery) backup:

select pg_start_backup(...);

Copy the disk image and any WAL files that are created.

select pg_stop_backup();

Make sure you have all the WAL segments.

The disk image + all the WAL segments are enough to create the DB.

See also github.com/wal-e/wal-e. It's highly recommended.

It automates backups to S3.

He explained how to do a PITR.

With PITR, you can roll back to a particular point in time. You don't have to replay everything.

This is super handy for application failures.

RDS is something that scripts all this stuff for you.

Replication

Send the WAL to another server.

Keep the server up to date with the primary server.

That's how PostgreSQL replication works.

The old way was called "WAL Archiving". Each 16MB segment was sent to the secondary when complete. Use rsync, WAL-E, etc., not scp.

The new way is Streaming Replication.

The secondary gets changes as they happen.

It's all set up via recovery.conf in your $PGDATA.

He showed a recovery.conf for a secondary machine, and showed how to let it become the master.

Always have a disaster recovery strategy.

pg_basebackup is a utility for doing a snapshot of a running server. It's the easiest way to take a snapshot to start a new secondary. It's also useful for archival backups. It's not the fastest thing, but it's pretty foolproof.

Replication:

The good:

Easy to set up.

Schema changes are replicated.

Secondaries can handle read-only queries for load balancing.

It either works or it complains loudly.

The bad:

You get the entire DB cluster or none of it.

No writes of any kind to the secondary, not even temporary tables.

Some things aren't replicated, such as temporary tables and unlogged tables.

His advice is to start with WAL-E. The README tells you everything. It fixes a ton of problems.

The biggest problem with WAL-E is that writing to S3 can be slow.

Another way to do funky things is trigger-based replication. There's a bunch of third-party packages to do this.

Bucardo is one that lets you do multi-master setups.

However, they're fiddly and complex to set up. They can also fail quietly.

Transactions, MVCC, and Vacuum

BEGIN;
INSERT ...;
INSERT ...;
COMMIT;

By the way, no bank works this way ;)

Everything runs inside of a transaction.

If there is no explicit transaction, each statement is wrapped in one for you.

Everything that modifies the database is transactional, even schema changes.

\d shows you all your tables.

With a transaction, you can even rollback a table drop.

South (the Django migration tool) runs the whole migration in a single transaction.

Many resources are held until the end of a transaction. Keep your transactions brief and to the point.

Beware of "IDLE IN TRANSACTION" sessions. This is a problem for Django apps.

A tuple in Postgres is the same thing as a row.

Postgres uses Multi-Version Concurrency Control. Each transaction sees its own version of the database.

Writers only block writers to the same tuple. Nothing else causes blocking.

Postgres will not allow two snapshots to "fork" the database. If two people try to write to the same tuple, Postgres will block one of them.

There are higher isolation modes. His description of them was really interesting.

He suggested that new apps use SERIALIZABLE. This will help you find the concurrency errors in your app.
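
As a rough illustration (not from the talk), here's a psycopg2 sketch that runs a transaction at SERIALIZABLE isolation and retries on serialization failures; the accounts table and the DSN are invented:

import psycopg2
from psycopg2 import extensions

conn = psycopg2.connect("dbname=example")  # hypothetical DSN
conn.set_session(isolation_level=extensions.ISOLATION_LEVEL_SERIALIZABLE)

for attempt in range(3):
  try:
    with conn, conn.cursor() as cur:
      # Two writes that must stay consistent with each other.
      cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = %s", (1,))
      cur.execute("UPDATE accounts SET balance = balance + 10 WHERE id = %s", (2,))
    break
  except extensions.TransactionRollbackError:
    # Serialization failure: the transaction was rolled back and is safe to retry.
    continue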

Deleted tuples are not usually immediately freed.

Vacuum's primary job is to scavenge tuples that are no longer visible to any transaction.

autovacuum generally handles this problem for you without intervention (since version 8).

Run analyze after a major database change to help the planner out.

If someone tells you "vacuum's not working", they're probably wrong.

The DB generally stabilizes at 20% to 50% bloat. That's acceptable.

The problem might be that there are long-running transactions or idle-in-transaction sessions. They'll block vacuuming. So will manual table locking.

He talked about vacuum issues for rare situations.

Schema Design

Normalization is important, but don't obsess about it.

Pick "entities". Make sure that no entity-level info gets pushed into the subsidiary items.

Pick a naming scheme and stick with it.

Plural or singular? DB people tend to like plural. ORMs tend to like singular.

You probably want lower_case to avoid quoting.

Calculated denormalization can sometimes be useful; copied denormalization is almost never useful.

Joins are good.

PostgreSQL executes joins very efficiently. Don't be afraid of them.

Don't worry about large tables joined with small tables.

Use the type system. It has a rich set of types.

Use domains to create custom types.

A domain is a core type + a constraint.

Don't use polymorphic fields (fields whose interpretation is dependent on another field).

Don't use strings to store multiple types.

Use constraints. They're cheap and fast.

You can create constraints across multiple columns.

Avoid Entity-Attribute-Value schemas. They cause great pain. They're very inefficient. They make reports very difficult.

Consider using UUIDs instead of serials as synthetic keys.

The problem with serials for keys is that merging tables can be hard.

Don't have "Thing" tables like "Object" tables.

If a table has a few frequently-updated fields and a few slowly-updated fields, consider splitting the table. Split the fast-moving stuff out into a separate 1-to-1 table.

Arrays are a first-class type in PostgreSQL. They're a good substitute for a subsidiary table.

A list of tags is a good fit for arrays.

He talked about hstore. It's much better than Entity-Attribute-Value. It's great for optional, variable attributes. It's like a hash. It can be indexed, searched, etc. It lets you add attributes to tables for users. Don't use it as a way to avoid all table modifications.

json is now a built in type.

There's also jsonb.

Avoid indexes on big things, like 10k character strings.

NULL is a total pain in the neck.

Only use it to mean "missing value".

Never use it to represent a meaningful value.

Let's call anything 1MB or more a "very large object". Store them in files. Store the metadata in the database. The database API is just not a good fit for this.

Many-to-many tables can get extremely large. Consider replacing them with array fields (either one way or both directions). You can use a trigger to maintain integrity.

You don't want more than about 250k entries in an array.

Use UTF-8. Period.

Always use TIMESTAMPTZ (which Django uses by default). Don't use TIMESTAMP. TIMESTAMPTZ stores the timestamp in UTC and converts it to the client's time zone on output.

Index types:

B-Tree

Use a B-Tree index on a column if you frequently run queries that filter on that column with one of the comparison operators and only get back 10-15% of the rows.

Postgres won't use the index if the query is going to get back more than about 15% of the rows, because at that point it's faster to scan the table than to scan the index.

Use a partial index if you can ignore most of the rows.

The indexed value has to be copied into the index, which is another reason to avoid indexing big values.

GiST

It's a framework to create indexes.

KNN indexes support K-nearest-neighbor searches.

GIN

Generalized inverted index. Used for full-text search.

The other index types are either not good or very specialized.

Why isn't it using my index?

Use explain analyze to look at the query.

If it thinks it's going to require most of the rows, it'll do a table scan.

If it's wrong, use analyze to update the planner stats.

Sometimes, it can't use the index.

Two ways to create an index:

create index

create index concurrently

reindex rebuilds an index from scratch.

pg_stat_user_indexes tells you about how your indexes are being used.

What do you do if a query is slow:

Use explain or explain analyze.

explain doesn't actually run the query.

"Cost" is measured in arbitrary units. Traditionally, they have been "disk fetches". Costs are inclusive of subnodes.

explain analyze actually runs the query (unlike plain explain).
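
For example, here's a small psycopg2 sketch of printing a plan from Python; the users table and the DSN are assumptions:

import psycopg2

conn = psycopg2.connect("dbname=example")  # hypothetical DSN

with conn, conn.cursor() as cur:
  # EXPLAIN ANALYZE executes the query and returns the plan as rows of text.
  cur.execute("EXPLAIN ANALYZE SELECT * FROM users WHERE email = %s",
              ("alice@example.com",))
  for (line,) in cur:
    print(line)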

Things that are bad:

Joins between 2 large tables.

Cross joins (cartesian products). These often happen by accident.

Sequential scans on large tables.

select count(*) is slow because it results in a full table scan: every tuple has to be checked to see whether it's alive or dead.

offset / limit. The query still runs and then throws away that many rows. Beware that GoogleBot will relentlessly page through your site. Paginate on other keys instead.

If the database is slow:

Look at pg_stat_activity:

select * from pg_stat_activity;

tail -f the logs.

Too much I/O? iostat 5.

If the database isn't responding:

Try connecting to it using psql.

pg_stat_activity

pg_locks

Python Particulars

psycopg2 is the only real option in Python 2.

The result set of a query is loaded into client memory when the query completes. If there are a ton of rows, you could run out of memory. If you want to scroll through the results, use a "named" cursor. Be sure to dispose of it properly.
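
Here's a minimal sketch of a named (server-side) cursor with psycopg2; the big_events table and the handle function are placeholders:

import psycopg2

conn = psycopg2.connect("dbname=example")  # hypothetical DSN

with conn:
  with conn.cursor(name="big_scan") as cur:  # a name makes it a server-side cursor
    cur.itersize = 2000  # rows fetched per network round trip
    cur.execute("SELECT id, payload FROM big_events")
    for row in cur:
      handle(row)  # hypothetical per-row function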

The Python 3 situation is not so great. There's py-postgresql. It's pure Python.

If you are using Django 1.6+, use the @atomic decorator.

Cluster all your writes into small transactions. Leave read operations outside.

Do all your writes at the very end of the view function.
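
Here's a minimal sketch of that pattern using Django 1.6's transaction API; the Order model and the helper functions are hypothetical:

from django.db import transaction
from django.shortcuts import render

# Order, load_cart, and compute_total are hypothetical app code.
def checkout(request):
  cart = load_cart(request)     # read-only work stays outside the transaction
  total = compute_total(cart)

  with transaction.atomic():    # one short transaction for all the writes
    order = Order.objects.create(user=request.user, total=total)

  return render(request, "checkout_done.html", {"order": order})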

Multi-database works very nicely with hot standby.

Point the writes at the primary, and the reads at the secondary.

For Django 1.5, use the @xact decorator.

Sloppy transaction management can cause the dreaded Django idle-in-transaction problem.

Use South for database migrations. South is being merged into Django in version 1.7.

You can use manual migrations for stuff the Django ORM can't specify.

Special Situations

Upgrade to 9.3.4. Upgrade minor versions promptly.

Major version upgrades require more planning. pg_upgrade has to be run when the database is not running.

A full pg_dump / pg_restore is always the safest, although not the most practical.

Always read the release notes.

All parts of a replication set must be upgraded at once (for major versions).

Use copy, not insert, for bulk loading data. psycopg2 has a nice interface. Do a vacuum afterwards.
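
Here's a hedged sketch of a bulk load with psycopg2's copy_from; the users table and the sample rows are invented:

import io
import psycopg2

conn = psycopg2.connect("dbname=example")  # hypothetical DSN
rows = io.StringIO("1\talice\n2\tbob\n")   # tab-separated rows for a hypothetical users table

with conn, conn.cursor() as cur:
  # COPY streams all the rows in one shot; much faster than many INSERTs.
  cur.copy_from(rows, "users", columns=("id", "name"))

# VACUUM can't run inside a transaction block, so switch to autocommit for it.
conn.autocommit = True
with conn.cursor() as cur:
  cur.execute("VACUUM ANALYZE users")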

AWS

Instances can disappear and come back up without instance storage.

EBS can fail to reattach after reboot.

PIOPS are useful (but pricey) if you are using EBS.

Script everything: instance creation, PostgreSQL setup, etc. Use Salt. Use a VPC.

Scale up and down as required to meet load. If you're just using them to rent a server, it's really expensive.

PostgreSQL RDS is a managed database instance. Big plus: automatic failover! Big minus: you can't read from the secondary. It's expensive. It's a good place to start.

Sharding

Eventually, you'll run out of write capacity on your master.

postgres-xc is an open source fork of PostgreSQL.

Bucardo provides multi-master write capability.

He talked about custom sharding.

Instagram wrote a nice article about it.

Pooling

Opening a connection is expensive. Use a pooler.

pgbouncer is a pooler.

pgPool II can even do query analysis. However, it has higher overhead and is more complex to configure.

Tools

Monitor everything.

check_postgres.pl is a plugin to monitor PostgreSQL.

pgAdmin III and Navicat are nice clients.

pgbadger is for log analysis. So is pg_stat_statements.

Closing

MVCC works by each tuple having a range of transaction IDs that can see that tuple.

Failover is annoying to do in the real world. People use HAProxy, some pooler, etc. with some scripting, or they have a human do the failover.

HandyRep is a server-based tool designed to allow you to manage a PostgreSQL "replication cluster", defined as a master and one or more replicas on the same network.

Dagger: A Dependency Injection Framework for Android and Java

Dagger is a new dependency injection framework for Android and Java. I went to a meetup yesterday to learn more about it. These are my notes:

The talk was by Jake Wharton who works at Square.

Every single app has some form of DI. You can do DI even if you're not using a library for doing it. The goal of DI is to separate the behavior of something from its required classes. If you've ever used a constructor to receive stuff, you've done a simple version of DI.

Square used Guice heavily.

Problems with Guice:

  • Config problems fail at runtime.
  • Slow initialization, slow injection, and memory problems.

These problems are worse on Android. Guice causes the OS to load all the code for your app at once, which made their app take 2 seconds to start.

They called Dagger "Object Graph" initially.

Goals of Dagger:

  • Static analysis of all dependencies and injections.
  • Fail as early as possible: at compile time, not runtime.
  • Eliminate the need for reflection of methods and annotations at runtime. Reflection in Dalvik is really slow.
  • Have negligible memory impact.

Jesse Wilson wrote it over the course of 5 weeks. He previously worked on Guice and Dalvik.

Square switched from Guice to Dagger in a fairly short period of time.

The name Dagger refers to "directed acyclic graph".

An ObjectGraph is the central dependency manager and injector.

@Module + @Provides

@Inject

@Singleton

Modules are meant to be composed together.

@Inject is required.

Field injection or constructor injection.

Dependencies can be stored in private final fields.

If you have @Inject, you don't have to say @Provides.

Injected fields cannot be private or final. They can be package protected.

Object graphs can be scoped. One object graph is a superset of another. For instance, you might create a new object graph once the user logs in that contains all of the objects that require a user object.

Android

The Android platform makes it really hard to test your apps. He showed how they deal with it.

The ObjectGraph is just another object. Hence, you can pass it around like a normal object.

There's one pain point in Dagger. All injection points must be listed on a module. This is used for aggressive static analysis.

Use overrides to facilitate testing.

He showed how to integrate with Gradle.

They have a "Debug Drawer" in their apps. It is hidden in the UI, but lets you configure all sorts of debug settings.

U+2020 is a sample app to show how to do all of this.

Using DI Incorrectly

Do NOT ignore the pattern.

Do NOT make every class use the pattern. Use it for the big things, such as the things that talk to a remote API.

Do NOT store dependencies as static fields.

Other Stuff

The Android docs say not to use DI. That advice is stale. Those complaints don't apply to Dagger.

Dagger is developer and debugger friendly.

The Future

Dagger has been out for 18 months.

They're working on the next major version, version 2.0.

Google is leading the development of the next version.

It won't use any reflection.

Dagger is not Android specific.

They're getting rid of injects lists.

Components encapsulate dependencies.

There will be dedicated annotations to denote scopes.

See squ.re/dagger2.

Questions

They use protocol buffers for their APIs. They use a schema for their APIs.

They have code that can generate a GraphViz file that shows you your dependency graph.

In their apps, they have a network module, an Android module, an app module, etc.

There's an IntelliJ plugin that lets you jump between @Inject and @Provides.

Friday, March 28, 2014

Books: Two Scoops of Django: Best Practices For Django 1.6

I just finished reading the book Two Scoops of Django: Best Practices For Django 1.6. I had already reviewed the previous edition, so I was anxious to see what had changed. In short, I loved it!

It's not an introduction, tutorial, or a reference for Django. In fact, it assumes you've already gone through the Django tutorial, and it occasionally refers you to the Django documentation. Rather, it tells you what you should and shouldn't do to use Django effectively. It's very prescriptive, and it has strong opinions. I've always enjoyed books like that. Best of all, it's only about 400 pages long, and it's very easy to read.

This edition is 100 pages longer than the previous edition, and I really enjoyed the new chapters. It has even more silly drawings and creamy ice cream analogies than the original, and even though I'm lactose intolerant, that made the book a lot of fun to read.

Having read the book cover-to-cover, even though I'm fairly new to Django, I kind of feel like I know what I'm doing at this point--at least a little ;) I was afraid when I started using Django that I wouldn't have that feeling until I had used it for a couple years.

So, if you're using Django, I highly recommend it!

Thursday, February 27, 2014

Python: A Response to Glyph's Blog Post on Concurrency

If you haven't seen it yet, Glyph wrote a great blog post on concurrency called Unyielding. This blog post is a response to that blog post.

Over the years, I've tried each of the approaches he talks about while working at various companies. I've known all of the arguments for years. However, I think his blog post is the best blog post I've read for the arguments he is trying to make. Nice job, Glyph!

In particular, I agree with his statements:

What I hope I’ve demonstrated is that if you agree with me that threading has problematic semantics, and is difficult to reason about, then there’s no particular advantage to using microthreads, beyond potentially optimizing your multithreaded code for a very specific I/O bound workload.
There are no shortcuts to making single-tasking code concurrent. It's just a hard problem, and some of that hard problem is reflected in the difficulty of typing a bunch of new concurrency-specific code.

In this blog post, I'm not really disputing his core message. Rather, I'm just pointing out some details and distinctions.

First of all, it threw me off when he mentioned JavaScript since JavaScript doesn't have threads. In the browser, it has web workers which are like processes, and in Node, it has a mix of callbacks, deferreds, and yield. However, reading his post a second time, all he said was that JavaScript had "global shared mutable state". He never said that it had threads.

The next thing I'd like to point out is that there are some real readability differences between the different approaches. Glyph did a good job of arguing that it's difficult to reason about concurrency when you use threads. However, if you ignore race conditions for a moment, I think it's certainly true that threads, explicit coroutines, and green threads are easier to read than callbacks and deferreds. That's because they let you write code in a more traditional, linear fashion. Even though I can do it, using callbacks and deferreds always causes my brain to hurt ;) Perhaps I just need more practice.

Another thing to note is that the type of application matters a lot when you need to address concurrency concerns. For instance, if you're building a UI, you don't want any computationally heavy work done on the UI thread. In Android, you do as little CPU-heavy and IO-heavy work as possible on the UI thread and instead push that work off into other threads.

Other things to consider are IO bound vs. CPU bound and stateful vs. stateless.

Threads are fine, if all of the following are true:

  • You're building a stateless web app.
  • You're IO bound.
  • All mutable data is stored in a per-request context object, in per-request instances, or in thread-local storage.
  • You have no module-level or class-level mutable data.
  • You're not doing things like creating new classes or modules on the fly.
  • In general, threads don't interact with each other.
  • You keep your application state in a database.

Sure there's always going to be some global, shared, mutable data such as sys.modules, but in practice Python itself protects that using the GIL.

I've built apps such as the above in a multithreaded way for years, and I've never run into any race conditions. The difference between this sort of app and the app that led to Glyph's "buggiest bug" is that he was writing a very stateful application server.

I'd also like to point out that it's important to not overlook the utility of UNIX processes. Everyone knows how useful the multiprocessing module is and that processes are the best approach in Python for dealing with CPU bound workloads (because you don't have to worry about the GIL).

However, using a pre-fork model is also a great way of building stateless web applications. If you have to handle a lot of requests, but you don't have to handle a very large number simultaneously, pre-forked processes are fine. The upside is that the code is both easy to read (because it doesn't use callbacks or deferreds), and it's easy to reason about (because you don't have the race conditions that threads have). Hence, a pre-fork model is great for programmer productivity. The downside is that each process can eat up a lot of memory. Of course, if your company makes it to the point where hardware efficiency costs start outweighing programmer efficiency costs, you have what I like to call a "nice to have problem". PHP and Ruby on Rails have both traditionally used a pre-fork approach.

I'm also a huge fan of approaches such as Erlang that give you what is conceptually a process, without the overhead of real UNIX processes.

As Glyph hinted at, this is a really polarizing issue, and there really are no easy, perfect-in-every-way solutions. Any time concurrency is involved, there are always going to be some things you need to worry about regardless of which approach to concurrency you take. That's why we have things like databases, transactions, etc. It's really easy to fall into religious arguments about the best approach to concurrency at an application level. I think it's really helpful to be frank and honest about the pros and cons of each approach.

That being said, I do look forward to one day trying out Guido's asyncio module.


Thursday, January 30, 2014

Python: A lightning quick introduction to virtualenv, nose, mock, monkey patching, dependency injection, and doctest

virtualenv

virtualenv is a tool for installing Python packages locally (i.e. local to a particular project) instead of globally. Here's how to get everything setup:

# Make sure you're using the version of Python you want to use.
which python

sudo easy_install -U setuptools
sudo easy_install pip
sudo pip install virtualenv

Now, let's setup a new project:

mkdir ~/Desktop/sfpythontesting
cd ~/Desktop/sfpythontesting
virtualenv env

# Do this anytime you want to work on the application.
. env/bin/activate

# Make sure that pip is running from within the env.
which pip

pip install nose
pip install mock
pip freeze > requirements.txt

# Now that you've created a requirements.txt, other people can just run:
# pip install -r requirements.txt

nose

Nose is a popular Python testing library. It's simple and powerful.

Create a file, ~/Desktop/sfpythontesting/sfpythontesting/main.py with the following:

import random

def sum(a, b):
  return a + b

Now, create another file, ~/Desktop/sfpythontesting/tests/test_main.py with the following:

from nose.tools import assert_equal, assert_raises
import mock

from sfpythontesting import main

def test_sum():
  assert_equal(main.sum(1, 2), 3)

To run the tests:

nosetests --with-doctest

Testing a function that raises an exception

Add the following to main.py:

def raise_an_exception():
  raise ValueError("This is a ValueError")

And the following to test_main.py:

def test_raise_an_exception():
  with assert_raises(ValueError) as context:
    main.raise_an_exception()
  assert_equal(str(context.exception), "This is a ValueError")

Your tests should still be passing.

Monkeypatching

Sometimes there are parts of your code that are difficult to test because they involve randomness, they are time dependent, or they involve external things such as third-party web services. One approach to solving this problem is to use a mocking library to mock out those sorts of things:

Add the following to main.py:

def make_a_move_with_mock_patch():
  """Figure out what move to make in a hypothetical game.

  Use random.randint in part of the decision making process.

  In order to test this function, you have to use mock.patch to monkeypatch random.randint.

  """
  if random.randint(0, 1) == 0:
    return "Attack!"
  else:
    return "Defend!"

Now, add the following to test_main.py. This code dynamically replaces random.randint with a mock (that is, a fake version) thereby allowing you to make it return the same value every time.

@mock.patch("sfpythontesting.main.random.randint")
def test_make_a_move_with_mock_patch_can_attack(randint_mock):
  randint_mock.return_value = 0
  assert_equal(main.make_a_move_with_mock_patch(), "Attack!")

@mock.patch("sfpythontesting.main.random.randint")
def test_make_a_move_with_mock_patch_can_defend(randint_mock):
  randint_mock.return_value = 1
  assert_equal(main.make_a_move_with_mock_patch(), "Defend!")

Your tests should still be passing.

Here's a link to a more detailed article on the mock library.

Dependency injection

Another approach to this same problem is to use dependency injection. Add the following to main.py:

def make_a_move_with_dependency_injection(randint=random.randint):
  """This is another version of make_a_move.

  Accept the randint *function* as a parameter so that the test code can inject a different
  version of the randint function.

  This is known as dependency injection.

  """
  if randint(0, 1) == 0:
    return "Attack!"
  else:
    return "Defend!"

And add the following to test_main.py. Instead of letting make_a_move_with_dependency_injection use the normal version of randint, we pass in our own special version:

def test_make_a_move_with_dependency_injection_can_attack():
  def randint(a, b): return 0
  assert_equal(main.make_a_move_with_dependency_injection(randint=randint), "Attack!")

def test_make_a_move_with_dependency_injection_can_defend():
  def randint(a, b): return 1
  assert_equal(main.make_a_move_with_dependency_injection(randint=randint), "Defend!")

To learn more about dependency injection in Python, see this talk by Alex Martelli.

Since monkeypatching and dependency injection can solve similar problems, you might be wondering which one to use. This turns out to be sort of a religious argument akin to asking whether you should use Vi or Emacs. Personally, I recommend using a combination of PyCharm and Sublime Text ;)

My take is to use dependency injection when you can, but fall back to monkeypatching when using dependency injection becomes impractical. I also recommend that you not get bent out of shape if someone disagrees with you on this subject ;)

doctest

One benefit of using nose is that it can automatically support a wide range of testing APIs. For instance, it works with the unittest testing API as well as its own testing API. It also supports doctests which are tests embedded inside of the docstrings of normal Python code. Add the following to main.py:

def hello_doctest(name):
  """This is a Hello World function for using Doctest.

  >>> hello_doctest("JJ")
  'Hello, JJ!'

  """
  return "Hello, %s!" % name

Notice the docstring serves as both a useful example as well as an executable test. Doctests have fallen out of favor in the last few years because if you overuse them, they can make your docstrings really ugly. However, if you use them to make sure your usage examples keep working, they can be very helpful.

Conclusion

Ok, there's my lightning quick introduction to virtualenv, nose, mock, monkey patching, dependency injection, and doctest. Obviously I've only just scratched the surface. However, hopefully I've given you enough to get started!

As I mentioned above, people tend to have really strong opinions about the best approaches to testing, so I recommend being pragmatic with your own tests and tolerant of other people's strong opinions on testing. Furthermore, testing is a skill, kind of like coding Python is a skill. To get really good at it, you're going to need to learn a lot more (perhaps by reading a book) and practice. It'll get easier with time.

If you enjoyed this blog post, you might also enjoy my other short blog post, The Zen of Testing. Also, here's a link to the code I used above.

Tuesday, January 14, 2014

Interesting Computer Failures from the Annals of History

If you're young enough to not know what "Halt and Catch Fire", "Killer poke", and "lp0 on fire" are, here's a fun peek at some of the more interesting computer failures, failure modes, and failure messages from the annals of computer history:

Thanks go to Chris Dudte for first introducing me to "Halt and Catch Fire" ;)

Thursday, January 09, 2014

Humor: More Knuth Jokes

When Knuth implements tail call optimization, it's actually faster than iteration.

All of Knuth's loops terminate...with extreme prejudice.

The NSA is permanently parked outside of Knuth's house, hoping that he might help them crack public key encryption. Sometime last year, Knuth gave them a copy of "The Art of Computer Programming", but refused to tell them which page the algorithm was on.

Knuth taught a group of kids how to use their fingers as abacuses. It turns out that his method is Turing Complete.

Python: My Notes from Yesterday's SF Python Meetup

Here are my notes from yesterday's SF Python Meetup:

Embed Curl

John Sheehan.

embedcurl.com

It creates a pretty version of a curl command and the output. You can embed it in your site.

shlex is a module in Python to do simple lexical analysis.
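
For example (a tiny illustration, not from the talk):

import shlex

# shlex.split understands shell-style quoting, which is handy for pulling
# apart a curl command line.
print(shlex.split('curl -H "Accept: application/json" https://example.com/api'))
# ['curl', '-H', 'Accept: application/json', 'https://example.com/api']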

1-Click Deployment with Launch and Docker

Nate Aune @natea

There are 10 million repos on GitHub. The curve is exponential.

appsembler.com

It launches a Docker container.

It makes it easy to deploy certain types of apps.

You can embed a widget on your app that says "Launch demo site".

docker.io (see also docker.io/learn_more/)

He talked about Containers vs. VMs.

Containers share the OS, so they launch very quickly.

You can create new containers, and each container is just a diff of another container, so it uses very little space.

Yelp: Building a Python Service Stack

Julian Krause, John Billings

They're moving toward a Service Oriented Architecture.

There are over 100 engineers at Yelp.

They have about 180k lines of code in a Python webapp called yelp-main.

This has increased the amount of time to come up to speed and release new features.

They're splitting their large codebase into a lot of little Python codebases that speak HTTP/REST.

Example: metrics = json.loads(urllib2.urlopen(url).read())

They were using Tornado 1.

"Global dependencies considered harmful."

They couldn't upgrade to Tornado 2 because there were too many dependencies on Tornado 1.

They're using virtualenv now.

He thinks that virtualenv's bin/activate is doing it wrong. It should work slightly differently.

I mentioned that one of the problems he was trying to solve could be solved by PEX.

Future directions for isolation: Docker.

They're using pip.

wheel is a built-package format for Python.

pip install -r requirements.txt

Always use specific versions in your requirements.txt. Use ==.

Originally, they were using git submodules. They're not great.

They have separate repos for everything, and they release libraries for everything. They have a tool that monitors git tags.

They use Jenkins.

They use pypiserver.

They're switching from Tornado to Pyramid. It's been a successful migration.

There were issues with Tornado, including testing.

Application servers: gunicorn, mod_wsgi, Circus, Pylons/waitress, and uWSGI

They evaluated all of them and picked uWSGI. It's working well. It's stable. It's fast. A lot of it is written in C. It has good documentation. They can integrate the logging with Scribe. The community is good. They have proper rolling restarts for their Java apps. uWSGI has hot reloading.

Metrics, metrics, metrics!

What is the 99th percentile time for this endpoint?

Are all service instances slow, or is it just one?

How many QPS is this endpoint handling?

Which downstream service is killing our performance?

Are any clients still using the old API?

Did the new service version introduce a performance regression?

They use a Metrics package for their Java code.

They wrote a package for Python called uwsgi_metrics. It's not open source yet, but they'll open source it shortly.

Example: with uwsgi_metrics.timing('foo'): ...

They have a JSON endpoint on all their services that exposes metrics.

uwsgi uses a prefork worker model.

uwsgi has mule processes. They're processes that don't handle interactive traffic, but you can use them as worker processes.

They measured a 50us overhead for recording a metric. You don't want to do this for too many metrics. 10s of metrics is okay. 1000s of metrics isn't.

airbnb/nerve is a service registration daemon that performs health checks.

airbnb/synapse is a transparent service discovery framework for connecting an SOA.

He showed service registration using Nerve. It sends stuff to ZooKeeper.

ZooKeeper is a highly available key value store with nice consistency guarantees.

They use HAProxy.

ZooKeeper -> Synapse -> HAProxy

Client -> HAProxy -> Service hosts

They have an operations dashboard.

There's no static configuration. If a service is running, it appears in the dashboard.

They're not using Smart Stack in production yet.

They have a service called Service Docs. When you build a service, all the docs get put on this website, keyed by service name.

People almost always end up writing client libraries for services. If you don't write one up front, you'll end up writing one implicitly anyway.

The nerve thing should be running on the same machines as the services.

They use memcache within services. They don't yet put caches in front of services.

They're thinking about putting Nginx in front of their services to add a little HTTP caching.

They're still investigating security between services.

WTF is PEX

Brian Wickman.

This is the first time Brian has really formally announced PEX.

This is a shortened version of an internal talk he gave, so I'm not going to take notes for everything he said.

You can create a __main__.py, and then run "python .", and it'll work.
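
For example, a directory containing just this minimal, hypothetical __main__.py can be run with "python .":

# __main__.py
def main():
  print("Hello from __main__.py")

if __name__ == "__main__":
  main()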

pip search twitter.common

pip install twitter.common.python

This gives you the pex command.

A .pex is a ZIP file containing Python code that's executable. It's used to "compile" your Python projects down to a single file.

You can also use it to create a file that acts like a Python interpreter with all the requirements bundled ahead of time into it.

pex -r flask -p flask.pex

./flask.pex hello_world.py

You can use PEX to easily create self-contained Python applications.

Twitter uses pants. It's a build tool.

Aurora is a service scheduler built on top of Mesos.

Download Aurora to see an example of something that uses pants.

Pants builds modularity into your monorepo.

pants is like blaze at Google.

Pants is multi-interpreter and multi-platform. The pex files work on multiple platforms.

Aurora is half Java and half Python.

A .pex file is similar to a Java .war file.

Thursday, January 02, 2014

Shooting a Screencast that Shows an IDE

Many moons ago, I had to record a screencast. Most of the screencast was spent looking at code in an IDE. I wanted the IDE to be fullscreen, but I also wanted the text to be readable, even when the viewer wasn't watching fullscreen. Furthermore, I didn't want to spend all day zooming in on the cursor; I wanted things to "just work". After playing around with settings way too much, this is what worked for me:

  • I used Camtasia for Mac.
  • I plugged my laptop into my TV (using an HDMI cable) and configured the screen to be 720p. That's the only easy way I know of to get the screen to be exactly 720p.
  • I recorded the video at 720p. Hence, what was in the video matched 1 to 1 with what was on my screen.

Here's a link to the original video. The video is quite viewable in a normal YouTube window, but if you go fullscreen, it looks even better. The text is very crisp once YouTube switches over to the 720p version, but it's still readable even in lower bandwidth environments.