Commit

Minor doc improvements. One more demo/test.
Leo Dirac committed Oct 23, 2019
1 parent 0af4e1b commit 38752f3
Showing 4 changed files with 52 additions and 17 deletions.
17 changes: 10 additions & 7 deletions README.md
@@ -1,5 +1,5 @@
# timebudget
### A simple tool to see what's slow in your python program
### A stupidly-simple tool to see where your time is going in Python programs

Trying to figure out where the time's going in your Python code? Tired of writing `elapsed = time.time() - start_time`? You can find out with just a few lines of code after you

@@ -13,7 +13,7 @@ pip install timebudget
from timebudget import timebudget
timebudget.report_atexit() # Generate report when the program exits

@timebudget # Measure how long this function takes
@timebudget # Record how long this function takes
def possibly_slow():
...

@@ -26,12 +26,12 @@ And now when you run your program, you'll see how much time was spent in each an

```
timebudget report...
possibly_slow: 600.62ms for 3 execs
should_be_fast: 300.35ms for 2 execs
possibly_slow: 901.12ms for 3 execs
should_be_fast: 61.35ms for 2 execs
```


## More advanced usage
## Slightly more advanced usage

You can wrap specific blocks of code to be measured, and give them a name:

@@ -54,7 +54,8 @@ If you are doing something repeatedly, and want to know the percent of time doin
```python
@timebudget
def outer_loop():
possibly_slow()
if sometimes():
possibly_slow()
should_be_fast()
should_be_fast()

@@ -66,10 +67,12 @@ Then the report looks like:
```
timebudget report per outer_loop cycle...
outer_loop: 100.0% 440.79ms/cyc @ 1.0execs/cyc
possibly_slow: 40.9% 180.31ms/cyc @ 3.0execs/cyc
possibly_slow: 40.9% 180.31ms/cyc @ 0.6execs/cyc
should_be_fast: 13.7% 60.19ms/cyc @ 2.0execs/cyc
```

Here, the times in milliseconds are the totals (averages per cycle), not the average time per call. So in the above example, `should_be_fast` is taking about 30ms per call, but being called twice per loop. Similarly, `possibly_slow` is still about 300ms each time it's called, but it's only getting called on 60% of the cycles on average.
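A quick sanity check of that arithmetic in plain Python, using the per-cycle numbers from the sample report above:

```python
# Per-cycle totals and call counts taken from the sample report above.
ms_per_cyc = {"possibly_slow": 180.31, "should_be_fast": 60.19}
execs_per_cyc = {"possibly_slow": 0.6, "should_be_fast": 2.0}

# Average time per call = total ms per cycle / calls per cycle.
for name in ms_per_cyc:
    per_call = ms_per_cyc[name] / execs_per_cyc[name]
    print(f"{name}: {per_call:.1f} ms/call")
# possibly_slow comes out near 300 ms/call; should_be_fast near 30 ms/call.
```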


## Requirements

22 changes: 12 additions & 10 deletions demo3.py
@@ -1,27 +1,29 @@
import random
import time
from timebudget import timebudget

# More or less what's in the README for loops
# but 10x faster so we can test it 100x and get the averages about right.

@timebudget
def possibly_slow():
print('slow', end=' ', flush=True)
time.sleep(0.06)
time.sleep(0.03)

def sometimes():
return random.random() < 0.6

@timebudget
def should_be_fast():
print('quick', end=' ', flush=True)
time.sleep(0.03)
time.sleep(0.003)

@timebudget
def outer_loop():
possibly_slow()
possibly_slow()
if sometimes():
possibly_slow()
should_be_fast()
should_be_fast()
possibly_slow()
time.sleep(0.2)
print("dance!")

for n in range(7):
for n in range(100):
outer_loop()

timebudget.report('outer_loop')
27 changes: 27 additions & 0 deletions demo4.py
@@ -0,0 +1,27 @@
import time
from timebudget import timebudget

@timebudget
def possibly_slow():
print('slow', end=' ', flush=True)
time.sleep(0.06)

@timebudget
def should_be_fast():
print('quick', end=' ', flush=True)
time.sleep(0.03)

@timebudget
def outer_loop():
possibly_slow()
possibly_slow()
should_be_fast()
should_be_fast()
possibly_slow()
time.sleep(0.2)
print("dance!")

for n in range(7):
outer_loop()

timebudget.report('outer_loop')
3 changes: 3 additions & 0 deletions test_demos.py
@@ -11,3 +11,6 @@ def test_demo2():

def test_demo3():
import demo3

def test_demo4():
import demo4
