In the two examples below, plan_valuation generates a task graph over a range of dates. If the workers in the grid kept the intermediate task results, I would expect Cell-2 to run very fast, taking almost no time. However, it looks like the Cell-2 run doesn't reuse the Cell-1 run's intermediate results on the workers. Did I misunderstand Dask's caching?
Cell-1
%%time
import datetime

dates = [datetime.datetime(2018, 5, 17 - n) for n in range(6)]
ts2 = plan_valuation('1234', dates)
t2 = client.persist(ts2)   # expecting this to keep results on the workers
t2 = client.compute(t2)
res2 = t2.result()
BTW, I checked the worker.data dict. It looks like all intermediate results are cleaned up after a run finishes, so a later run sees no task results. Correct me if I am wrong.
Is there a way to enable some cache sharing between runs?
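If I understand it correctly, the distributed scheduler only keeps a key's result in worker memory while some client-side future or persisted collection still references it; once the last reference is released, the data for that key is dropped. A minimal pure-Python sketch of that refcounting behaviour (the class and key names here are illustrative, not Dask's actual API):

```python
# Sketch of how a scheduler might decide whether to keep an intermediate
# result: each task key carries a count of live client-side references;
# when the count drops to zero, the stored result is released.

class KeyStore:
    def __init__(self):
        self.data = {}      # key -> computed result (like worker.data)
        self.refcount = {}  # key -> number of live client references

    def acquire(self, key, compute):
        """Return the cached result for key, computing it if absent."""
        if key not in self.data:
            self.data[key] = compute()
        self.refcount[key] = self.refcount.get(key, 0) + 1
        return self.data[key]

    def release(self, key):
        """Drop one reference; free the result when none remain."""
        self.refcount[key] -= 1
        if self.refcount[key] == 0:
            del self.data[key]
            del self.refcount[key]

store = KeyStore()
store.acquire("valuation-2018-05-17", lambda: 42)  # computed and cached
store.release("valuation-2018-05-17")              # last reference gone

# After release, a second run must recompute: the key is no longer stored.
assert "valuation-2018-05-17" not in store.data
```

So in my case, letting t2 go out of scope (or being rebound in Cell-2) would explain why worker.data is empty afterwards.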
Cell-2
%%time
ts2 = plan_valuation('1234', dates[1:3])
t2 = client.compute(ts2)
res2 = t2.result()
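One client-side workaround I can think of (separate from the scheduler's own memory management) is to memoize the per-date building block so a second run reuses the first run's results, regardless of what the workers keep. Here value_one_date is a hypothetical stand-in for the per-date task inside plan_valuation, and functools.lru_cache plays the role of the shared cache:

```python
# Client-side memoization sketch: cache the expensive per-date task so
# overlapping runs reuse earlier results. value_one_date is a placeholder
# for the real per-date valuation, not part of any actual API.
import datetime
import functools

@functools.lru_cache(maxsize=None)
def value_one_date(plan_id, date):
    # The expensive per-date valuation would go here; a cheap stand-in:
    return hash((plan_id, date)) % 1000

def plan_valuation(plan_id, dates):
    return [value_one_date(plan_id, d) for d in dates]

dates = [datetime.datetime(2018, 5, 17 - n) for n in range(6)]

first = plan_valuation('1234', dates)        # cold run: computes all six dates
second = plan_valuation('1234', dates[1:3])  # warm run: served from the cache

assert second == first[1:3]
assert value_one_date.cache_info().hits == 2  # both dates were cache hits
```

This doesn't answer whether Dask itself can share results between runs, but it would at least make the Cell-2 pattern fast.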