# Does Python iterate at a constant speed?


```
def sortby(somelist, n):
    nlist = [(x[n], x) for x in somelist]
    nlist.sort()
    return [val for (key, val) in nlist]
```

```
def sortby_inplace(somelist, n):
    somelist[:] = [(x[n], x) for x in somelist]
    somelist.sort()
    somelist[:] = [val for (key, val) in somelist]
    return
```
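On modern Python the decorate-sort-undecorate pattern above is built in: `sorted()` and `list.sort()` accept a `key` callable, so both helpers shrink to a single call each. A sketch (not part of the original snippet):

```python
from operator import itemgetter

def sortby(somelist, n):
    # sorted() with a key replaces building and unpacking (key, value) tuples
    return sorted(somelist, key=itemgetter(n))

def sortby_inplace(somelist, n):
    somelist.sort(key=itemgetter(n))

somelist = [(1, 2, 'def'), (2, -4, 'ghi'), (3, 6, 'abc')]
print(sortby(somelist, 2))  # sorted by the string field
```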

```
>>> somelist = [(1, 2, 'def'), (2, -4, 'ghi'), (3, 6, 'abc')]
>>> somelist.sort()
>>> somelist
[(1, 2, 'def'), (2, -4, 'ghi'), (3, 6, 'abc')]
>>> nlist = sortby(somelist, 2)
>>> sortby_inplace(somelist, 2)
>>> nlist == somelist
True
>>> nlist = sortby(somelist, 1)
>>> sortby_inplace(somelist, 1)
>>> nlist == somelist
True
```

```s = ""
for substring in list:
s += substring```
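The loop above re-copies the growing string on every `+=`, which is quadratic in the worst case; `str.join` builds the result in one pass. A small sketch contrasting the two (the `parts` list is just an illustration):

```python
parts = ["py", "thon", " ", "rocks"]

# quadratic in the worst case: each += may copy the whole string so far
s = ""
for substring in parts:
    s += substring

# linear: join measures the total length, allocates once, copies once
joined = "".join(parts)
print(s == joined)  # True
```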

Suggestion : 2

Rule number one: only optimize when there is a proven speed bottleneck, and only optimize the innermost loop. (This rule is independent of Python, but it doesn't hurt to repeat it, since it can save a lot of work.)

If you feel the need for speed, go for built-in functions: you can't beat a loop written in C. Check the library manual for a built-in function that does what you want. If there isn't one, the guidelines below cover loop optimization.

And last but not least: collect data. Python's excellent profile module can quickly show the bottleneck in your code. If you're considering different versions of an algorithm, test them in a tight loop using time.perf_counter() or the timeit module (the time.clock() function this advice originally named was removed in Python 3.8).

The first version I came up with was totally straightforward:

```
def f1(list):
    string = ""
    for item in list:
        string = string + chr(item)
    return string
```

This version performs exactly the same set of string operations as the first one, but gets rid of the for-loop overhead in favor of the faster, implied loop of the reduce() function:

```
def f2(list):
    return reduce(lambda string, item: string + chr(item), list, "")
```
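On Python 3 the same version still works once `reduce()` is imported, since it is no longer a builtin. A sketch:

```python
from functools import reduce  # reduce() moved to functools in Python 3

def f2(lst):
    return reduce(lambda string, item: string + chr(item), lst, "")

print(f2([104, 105]))  # 'hi'
```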

Hmm, said my friend. I need this to be faster. OK, I said, how about this version:

```
def f3(list):
    string = ""
    for character in map(chr, list):
        string = string + character
    return string
```

There's a general technique to avoid quadratic behavior in algorithms like this. I coded it as follows for strings of exactly 256 items:

```
def f5(list):
    string = ""
    for i in range(0, 256, 16):  # 0, 16, 32, 48, 64, ...
        s = ""
        for character in map(chr, list[i:i + 16]):
            s = s + character
        string = string + s
    return string
```

Finally, I tried a radically different approach: use only implied loops. Notice that the whole operation can be described as follows: apply chr() to each list item; then concatenate the resulting characters. We were already using an implied loop for the first part: map(). Fortunately, there are some string concatenation functions in the string module that are implemented in C. In particular, string.joinfields(list_of_strings, delimiter) concatenates a list of strings, placing a delimiter of choice between each two strings. Nothing stops us from concatenating a list of characters (which are just strings of length one in Python), using the empty string as delimiter. Lo and behold:

```
import string
def f6(list):
    return string.joinfields(map(chr, list), "")
```
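string.joinfields() disappeared along with the old string-module functions; on Python 3 the same trick is spelled with the str.join method, and bytes offers an even more direct route when the items are byte values. A sketch:

```python
def f6(lst):
    # str.join replaces string.joinfields(list_of_strings, delimiter)
    return "".join(map(chr, lst))

def f6_bytes(lst):
    # for values 0-255, bytes() + decode skips the per-item chr() calls
    return bytes(lst).decode("latin-1")

print(f6([104, 105]), f6_bytes([104, 105]))
```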

Suggestion : 3

Repeated execution of a set of statements is called iteration. Because iteration is so common, Python provides several language features to make it easier. We've already seen the for statement in chapter 3; this is the form of iteration you'll likely be using most often. But in this chapter we're going to look at the while statement, another way to have your program do iteration, useful in slightly different circumstances.

We call the first case definite iteration: we know ahead of time some definite bounds for what is needed. The latter case is called indefinite iteration: we're not sure how many iterations we'll need, and we cannot even establish an upper bound.

Encapsulation is the process of wrapping a piece of code in a function, allowing you to take advantage of all the things functions are good for. You have already seen some examples of encapsulation, including is_divisible in a previous chapter.

The continue statement is a control flow statement that causes the program to immediately skip the rest of the body of the loop for the current iteration. But the loop still carries on running for its remaining iterations.
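The continue behaviour described above can be seen in a short sketch (not from the original tutorial):

```python
# continue skips the rest of the body for this iteration only
collected = []
for i in range(1, 6):
    if i % 2 == 0:
        continue          # skip even numbers; the loop keeps going
    collected.append(i)
print(collected)  # [1, 3, 5]
```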

```
airtime_remaining = 15
print(airtime_remaining)
airtime_remaining = 7
print(airtime_remaining)
```
```
15
7
```
```
a = 5
b = a  # After executing this line, a and b are now equal
a = 3  # After executing this line, a and b are no longer equal
```

Suggestion : 4

Last Updated : 12 Jul, 2021


Suggestion : 5

September 12, 2020

Let’s look at an example.

```
# Decorate using @tf.function
import tensorflow as tf
import time
from datetime import datetime

@tf.function
def function(x):
    a = tf.constant([[2.0], [3.0]])
    b = tf.constant(4.0)
    return a + b
```

You can see that we have used the `@tf.function` decorator. This means that a graph for this function has been created. Let’s test it by calling the function with some input and then visualising it using Tensorboard.

```
# Plot a graph for function() using Tensorboard
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = 'logs/func/%s' % stamp
writer = tf.summary.create_file_writer(logdir)

tf.summary.trace_on(graph=True, profiler=True)
# Call only one tf.function when tracing.
z = function(2)
with writer.as_default():
    tf.summary.trace_export(
        name="function_trace",
        step=0,
        profiler_outdir=logdir)
```
```
%load_ext tensorboard
%tensorboard --logdir logs/func
```

Output:

```
array([[6.],
       [7.]], dtype=float32)
```

Let’s look at this speed up by observing the code-run time for a code as it is and then with `tf.function` decorator.

```
import timeit

class SequentialModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super(SequentialModel, self).__init__(**kwargs)
        self.flatten = tf.keras.layers.Flatten(input_shape=(28, 28))
        self.dense_1 = tf.keras.layers.Dense(128, activation="relu")
        self.dropout = tf.keras.layers.Dropout(0.2)
        self.dense_2 = tf.keras.layers.Dense(10)

    def call(self, x):
        x = self.flatten(x)
        x = self.dense_1(x)
        x = self.dropout(x)
        x = self.dense_2(x)
        return x

input_data = tf.random.uniform([60, 28, 28])

eager_model = SequentialModel()
graph_model = tf.function(eager_model)

print("Eager time:", timeit.timeit(lambda: eager_model(input_data), number=10000))
print("Graph time:", timeit.timeit(lambda: graph_model(input_data), number=10000))
```