What Is New in Python 3.2
Python 3.2 was released on February 20, 2011. Key additions include the concurrent.futures module for high-level thread and process pools, improved argparse as the successor to optparse, functools.lru_cache as a memoization decorator, and significant improvements to the ssl module. The __pycache__ directory system also debuted here, keeping compiled bytecode organized.
| Category | Change | PEP / Reference |
|---|---|---|
| New Modules | concurrent.futures -- ThreadPoolExecutor, ProcessPoolExecutor | PEP 3148 |
| Standard Library | functools.lru_cache -- memoization decorator | -- |
| Standard Library | argparse replaces optparse as the recommended CLI parser | PEP 389 |
| Interpreter | __pycache__ directory for compiled bytecode files | PEP 3147 |
| Security | Major SSL overhaul -- hostname checking, SNI, certificate verification | -- |
| Standard Library | html module with html.escape() | -- |
| Standard Library | datetime.timedelta total_seconds() method | -- |
| Standard Library | Improved logging -- logging.config.dictConfig() | -- |
| Performance | Faster Unicode I/O; io module rewritten in C | -- |
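Two of the smaller library additions in the table are easy to demonstrate together (a quick sketch; the variable names are illustrative):

```python
import datetime
import html

# datetime.timedelta.total_seconds(): the whole duration as a single float
delta = datetime.timedelta(hours=1, minutes=30)
print(delta.total_seconds())  # 5400.0

# html.escape(): escapes &, <, > (and quotes, by default)
print(html.escape('<a href="x">Tom & Jerry</a>'))
# &lt;a href=&quot;x&quot;&gt;Tom &amp; Jerry&lt;/a&gt;
```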
concurrent.futures -- High-Level Concurrency (PEP 3148)
The concurrent.futures module provides a simple, unified API for running work in threads or processes without managing pools manually. Use ThreadPoolExecutor for I/O-bound work, and ProcessPoolExecutor for CPU-bound work that needs to sidestep the GIL.
```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

urls = ["https://example.com", "https://python.org"]

def fetch(url):
    return urllib.request.urlopen(url).read()

with ThreadPoolExecutor(max_workers=5) as executor:
    futures = {executor.submit(fetch, url): url for url in urls}
    for future in as_completed(futures):
        url = futures[future]
        print("{}: {} bytes".format(url, len(future.result())))
```
The executor.map() method is the simplest pattern for applying a function to an iterable in parallel. submit() + as_completed() gives you results in completion order rather than submission order.
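A minimal executor.map() sketch (the `square` helper is illustrative, not part of the library):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as executor:
    # map() returns results in input order, unlike as_completed()
    results = list(executor.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```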
functools.lru_cache -- Memoization
@functools.lru_cache(maxsize=128) caches the return values of a function based on its arguments. Once the cache reaches maxsize entries, the least recently used entry is evicted. Use maxsize=None for an unbounded cache (functools.cache, added in 3.9, is shorthand for exactly this).
```python
from functools import lru_cache

@lru_cache(maxsize=256)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(100)           # Fast even on the first call: memoization makes the recursion linear
fibonacci.cache_info()   # CacheInfo(hits=98, misses=101, maxsize=256, currsize=101)
```
__pycache__ and Bytecode Management (PEP 3147)
Compiled .pyc files are now stored in a __pycache__ subdirectory alongside the source file, with the Python version embedded in the filename (e.g., module.cpython-32.pyc). This prevents conflicts when switching between Python versions sharing the same source tree, and avoids permission errors on read-only source directories.
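The interpreter can tell you where the bytecode for a given source file would be stored. A small sketch (note: this helper lived in the imp module in 3.2 and moved to importlib.util in 3.4; the path shown assumes a modern interpreter):

```python
import importlib.util

# Compute the __pycache__ path for a hypothetical source file.
path = importlib.util.cache_from_source("pkg/module.py")
print(path)  # e.g. pkg/__pycache__/module.cpython-312.pyc (tag varies by version)
```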
FAQ
When should I use ThreadPoolExecutor vs ProcessPoolExecutor?
Use ThreadPoolExecutor for I/O-bound tasks (network requests, file I/O, database queries) -- threads work well here because the GIL is released during I/O. Use ProcessPoolExecutor for CPU-bound work (image processing, numerical computation) that needs true parallelism. Each process has its own GIL and memory space; inter-process communication has serialization overhead.
Does lru_cache work with unhashable arguments?
No. All arguments must be hashable because they form the cache key. Lists, dicts, and sets are not hashable -- passing them raises a TypeError. For memoizing functions that take sequences, convert to tuples before calling, or use a custom caching approach.
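The tuple-conversion workaround can be sketched like this (`total` and `total_of` are hypothetical helpers for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(numbers):           # expects a hashable tuple
    return sum(numbers)

def total_of(seq):
    return total(tuple(seq))  # convert the unhashable list at the boundary

print(total_of([1, 2, 3]))    # 6
# Calling total([1, 2, 3]) directly would raise TypeError: unhashable type: 'list'
```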
Can I delete __pycache__ directories safely?
Yes. Python regenerates them on next import. Deleting __pycache__ is safe and sometimes necessary when distributing source packages or debugging import issues. It has no effect on runtime behavior other than a small startup delay the first time a module is imported.
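One common cleanup idiom on Unix-like systems (a sketch; adjust the starting path as needed):

```shell
# Remove every __pycache__ directory under the current tree.
find . -type d -name __pycache__ -exec rm -rf {} +
```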
Is argparse backward compatible with optparse scripts?
Not directly -- APIs differ. argparse is a clean rewrite with better help formatting, type coercion, subcommand support, and error messages. Migrating optparse scripts requires rewriting the option definitions, but the new API is significantly cleaner. optparse itself was deprecated in 3.2.
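A minimal argparse sketch showing two features optparse lacked -- positional arguments and built-in type coercion (the argument names here are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description="Repeat a word.")
parser.add_argument("word")                          # positional argument
parser.add_argument("--times", type=int, default=2)  # coerced to int automatically

args = parser.parse_args(["hello", "--times", "3"])
print(args.word * args.times)  # hellohellohello
```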
What does concurrent.futures.wait() do differently from as_completed()?
wait() blocks until all (or a set number of) futures complete and returns two sets: done and not-done. as_completed() is a generator that yields each future as it finishes, letting you process results progressively. Use as_completed() when you want to act on results immediately; use wait() when you need to know that a batch is complete before proceeding.
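The batch-completion pattern can be sketched as follows (the `work` function is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, wait

def work(n):
    return n * 10

with ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(work, n) for n in range(4)]
    done, not_done = wait(futures)  # blocks until the whole batch finishes

print(sorted(f.result() for f in done))  # [0, 10, 20, 30]
print(len(not_done))                     # 0
```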