Performance¶
This topic covers the runtime performance impact of PyLocket protection and strategies for optimization.
Overhead Summary¶
PyLocket adds minimal runtime overhead. The impact depends on your application's workload characteristics:
- I/O-bound applications see negligible impact — function decryption time is measured in microseconds, while I/O operations are measured in milliseconds
- CPU-bound applications see a small, single-digit percentage overhead proportional to function call frequency
- GUI applications see imperceptible impact — event handlers are cached after first invocation and UI rendering is handled by the framework
Workload Characteristics¶
I/O-Bound Applications¶
Applications that spend most time on I/O (network, disk, database) see negligible impact:
- Function decryption time is measured in microseconds
- I/O operations are measured in milliseconds
- The overhead is invisible relative to I/O latency
Examples: Web servers, API clients, file processors, database tools
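To see the scale difference, here is a minimal sketch (the 5 ms sleep stands in for a network or disk round trip; it is not a PyLocket API):

```python
import time

def handler():
    # A trivial Python call; even with a per-call decryption cost,
    # this completes on a microsecond scale.
    return 42

start = time.perf_counter()
result = handler()
call_time = time.perf_counter() - start

start = time.perf_counter()
time.sleep(0.005)  # stand-in for a ~5 ms network/disk round trip
io_time = time.perf_counter() - start

print(f"function call: {call_time * 1e6:.1f} us, I/O wait: {io_time * 1e3:.1f} ms")
```

The I/O wait dominates by three or more orders of magnitude, which is why protection overhead disappears into I/O latency.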
CPU-Bound Applications¶
Applications with tight computational loops see measurable but small impact:
- The overhead is proportional to function call frequency
- Code within a single function has zero overhead (PyLocket protects at the function boundary)
- Functions that run for a long time per call amortize the fixed per-call cost, so they see proportionally less overhead
Examples: Data processing, numerical computation, image processing
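The effect of call frequency can be sketched with plain Python. Any fixed per-call cost (PyLocket's or CPython's own) is paid at the same function boundary:

```python
import time

def tiny(x):
    # Called once per element: any fixed per-call cost is paid N times.
    return x + 1

def bulk(n):
    # Called once: the same per-call cost is paid a single time,
    # and the loop body inside runs with zero protection overhead.
    total = 0
    for i in range(n):
        total += i + 1
    return total

N = 1_000_000

start = time.perf_counter()
acc = 0
for i in range(N):
    acc += tiny(i)
many_calls = time.perf_counter() - start

start = time.perf_counter()
acc2 = bulk(N)
one_call = time.perf_counter() - start

print(f"{N} small calls: {many_calls:.3f}s, one large call: {one_call:.3f}s")
```

Both versions compute the same result; only the number of function-boundary crossings differs.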
GUI Applications¶
Desktop GUI applications are almost always I/O-bound (waiting for user input, rendering):
- PyLocket overhead is imperceptible to users
- Event handlers are typically cached after first invocation
- UI rendering is handled by the framework (not protected Python code)
Caching¶
PyLocket's native runtime includes a built-in cache for recently decrypted functions. Frequently called functions are decrypted once and served from cache on subsequent calls, significantly reducing the amortized cost of protection.
For typical applications, hot paths achieve very high cache hit rates, resulting in near-zero effective overhead for the most performance-sensitive code paths.
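Because the cost is concentrated in the first invocation, one pattern is to exercise hot paths once during startup so the cache is warm before latency-sensitive work begins. A sketch, where `transform` and `score` are hypothetical stand-ins for your own hot functions:

```python
def transform(items):
    # Hypothetical hot function standing in for your protected code.
    return [x * 2 for x in items]

def score(x):
    # Another hypothetical hot function.
    return x * x

def warm_up():
    # Invoke each hot function once with trivial input; on a protected
    # build this pays the one-time decryption cost up front.
    transform([0])
    score(0.0)

warm_up()  # call during application startup
```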
Optimization Strategies¶
1. Consolidate Hot Functions¶
If you have a tight loop calling many small functions, consider consolidating them:
```python
# Before: many small functions (more per-call overhead)
def step_a(x):
    return x + 1

def step_b(x):
    return x * 2

def process(data):
    results = []
    for item in data:
        item = step_a(item)
        item = step_b(item)
        results.append(item)
    return results
```

```python
# After: one larger function (less overhead)
def process(data):
    results = []
    for item in data:
        results.append((item + 1) * 2)
    return results
```
2. Use C Extensions for Hot Paths¶
C extension modules (.pyd, .so) are not modified by PyLocket. If you have performance-critical numerical code, consider using:
- NumPy / SciPy operations
- Cython-compiled modules
- ctypes / cffi calls to C libraries
These execute at native speed with zero PyLocket overhead.
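For example, moving an inner loop into NumPy keeps the per-element arithmetic in native code (a sketch; assumes NumPy is installed):

```python
import numpy as np

def scale_py(values, factor):
    # Pure-Python loop: runs as protected bytecode.
    return [v * factor for v in values]

def scale_np(values, factor):
    # The multiplication happens inside NumPy's compiled C code,
    # which PyLocket does not modify.
    return (np.asarray(values) * factor).tolist()

data = list(range(5))
assert scale_py(data, 3) == scale_np(data, 3)
```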
3. Profile Before and After¶
Always measure the actual impact on your specific application:
```python
import time

start = time.perf_counter()
# Your workload
elapsed = time.perf_counter() - start
print(f"Elapsed: {elapsed:.3f}s")
```
Compare unprotected vs. protected execution times to quantify the real-world impact for your use case.
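For less noisy numbers than a single wall-clock measurement, the standard-library `timeit` module can repeat the workload and report the best run (a sketch; replace `workload` with your own entry point):

```python
import timeit

def workload():
    # Stand-in for your real workload.
    return sum(i * i for i in range(10_000))

# repeat() returns one timing per repeat; the minimum is the most
# stable estimate, least affected by background noise.
best = min(timeit.repeat(workload, number=100, repeat=5))
print(f"best of 5: {best:.4f}s per 100 calls")
```

Running the same script on an unprotected and a protected build gives a directly comparable pair of numbers.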
Summary¶
- For most applications, PyLocket's performance impact is imperceptible
- I/O-bound workloads see effectively zero impact
- CPU-bound workloads see a small, single-digit percentage overhead
- The built-in cache significantly reduces amortized cost for hot paths
- C extensions execute at full native speed (not protected by PyLocket)
See Also¶
- Protection — What gets protected
- How Protection Works — Protection pipeline overview