Python is notorious for being a slow programming language. Looping over a problem in plain Python is often noticeably slower than doing the same work in a language like C.
Fortunately, there are ways to reduce the amount of time larger calculations take, no matter how simple they are.
NumPy vs Python
A great tool to use is NumPy. Being familiar with NumPy and its benefits can significantly speed up your code. NumPy is written primarily in C, which means its execution time beats Python's significantly. Below you can see an example of using NumPy arrays vs native Python lists.
import numpy
import time

# Create native Python lists and NumPy arrays
python_list1 = list(range(50000000))
python_list2 = list(range(50000000))
numpy_array1 = numpy.arange(50000000)
numpy_array2 = numpy.arange(50000000)

# Time our native Python lists
start_time = time.time()
python_list = [(x * y) for x, y in zip(python_list1, python_list2)]
print(f"Python Native Lists Runtime : {time.time() - start_time} seconds")

# Time our NumPy arrays
start_time = time.time()
numpy_array = numpy_array1 * numpy_array2
print(f"Time Taken by NumPy Arrays : {time.time() - start_time} seconds")
This yields the following results:
Python Native Lists Runtime : 2.6651768684387207 seconds
Time Taken by NumPy Arrays : 0.062390804290771484 seconds
As we can see, NumPy arrays are significantly faster! They beat Python's built-in lists by roughly 2.6 seconds here, which works out to more than 40 times faster. While 2.6 seconds may not seem like a huge difference, in programming it is a HUGE gap for an operation this small.
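The same gap shows up in other array operations too. Here is a quick sketch of the idea applied to summing numbers; the ten-million-element size and the sum operation are just illustrative choices, not part of the benchmark above:

import numpy
import time

# Build the same data as a Python list and as a NumPy array
size = 10000000
python_list = list(range(size))
numpy_array = numpy.arange(size)

# Sum with a plain Python loop
start_time = time.time()
total = 0
for value in python_list:
    total += value
print(f"Python loop sum : {time.time() - start_time} seconds")

# Sum with NumPy's vectorized reduction
start_time = time.time()
total = numpy_array.sum()
print(f"NumPy sum : {time.time() - start_time} seconds")

The vectorized call hands the whole loop over to NumPy's C code, which is where the speed-up comes from.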
Let’s look at another example of how we can make Python math more efficient.
Python Caching
If you are a new Python developer, or a new developer in general, you may not have heard of caches or caching before. Your computer has what is referred to as a cache, which is storage for temporary data. Typically this cache lives somewhere in your computer's memory.
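To make the idea concrete, here is a minimal hand-rolled sketch of caching with a plain dictionary. The slow_square function and its two-second sleep are just stand-ins for an expensive calculation:

import time

# A plain dictionary used as a hand-rolled cache
_cache = {}

def slow_square(n):
    if n in _cache:          # answer already computed, return it instantly
        return _cache[n]
    time.sleep(2)            # pretend this is an expensive calculation
    result = n * n
    _cache[n] = result       # store the answer for next time
    return result

print(slow_square(4))  # takes about 2 seconds
print(slow_square(4))  # returned from the cache almost instantly

This works, but you have to write all the bookkeeping yourself.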
Python makes it very easy to cache answers to a function. This can drastically reduce the amount of time functions take to complete. In this example we will use the Fibonacci sequence.
Let’s view the code without any caching.
import time

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

start_time = time.time()
for i in range(40):
    print(i, fib(i))
print(f"It took {time.time() - start_time} seconds.")
This unsurprisingly yields the following result:
It took 42.54846906661987 seconds.
Using the functools module to enable function caching, we can drastically reduce this time.
All we have to do is import lru_cache from functools, then add the decorator above the fib function with a maxsize of None. Remember to always read the documentation when using a new module; a maxsize of None may not be the best idea for your setup. Let's see what this code looks like:
from functools import lru_cache
import time

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

start_time = time.time()
for i in range(40):
    print(i, fib(i))
print(f"It took {time.time() - start_time} seconds.")
This yields a MUCH quicker result of:
It took 0.01403188705444336 seconds.
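As a quick sanity check, a function wrapped by lru_cache exposes a cache_info() method that reports how often the cache was actually used. Adding one line after the loop shows the hits and misses (the exact numbers shown in the comment are what this particular loop should produce, but treat them as illustrative):

# Inspect the cache that lru_cache maintains for fib
print(fib.cache_info())
# Prints something like: CacheInfo(hits=76, misses=40, maxsize=None, currsize=40)

On Python 3.9 and newer, functools.cache is a shorthand for lru_cache(maxsize=None).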
In Python, there are always many roads to the same destination. As we further our Python skills, it is important to learn efficient ways to design our programs. Your future developer self will thank you!