
[Bugfix] Fix multiple send recv #320

Merged

merged 25 commits into dmlc:master on Dec 24, 2018

Conversation

lingfanyu
Collaborator

@lingfanyu lingfanyu commented Dec 17, 2018

Description

This PR fixes #75 and several other small bugs (see below), and supports multiple recv calls after send.

Previously, each recv call cleared the message graph, so subsequent recv calls saw no messages. This PR fixes the bug by removing the message graph and storing messages in the edge space (still in a separate message frame). The message frame now has the same size as the edge frame, and whether an edge carries a message is indicated by a special "boolean" index in DGLGraph (graph._msg_index).

Now users can:

  • call send multiple times (on all edges or a subset), each call using a different message function (writing different message fields)
  • call recv multiple times
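The mechanism described above can be sketched in plain Python. All names below are illustrative stand-ins, not the actual DGL API; the real implementation stores messages in a frame the same size as the edge frame and tracks presence with graph._msg_index:

```python
class MsgIndexSketch:
    """Toy model of the per-edge message mask described in this PR.

    Messages live in a frame with one slot per edge; a boolean index
    records which edges currently carry a message (the graph._msg_index
    analogue). Illustrative sketch only, not DGL code.
    """

    def __init__(self, num_edges):
        self.msg_frame = [None] * num_edges   # one slot per edge
        self.msg_index = [False] * num_edges  # which edges have a message

    def send(self, eids, msg_func):
        # Multiple send calls may target different edges (or overwrite
        # earlier messages) with different message functions.
        for eid in eids:
            self.msg_frame[eid] = msg_func(eid)
            self.msg_index[eid] = True

    def recv(self, eids):
        # Only edges whose mask bit is set deliver a message; recv clears
        # the bit so a message is not consumed twice, but messages on
        # other edges survive for later recv calls.
        out = {}
        for eid in eids:
            if self.msg_index[eid]:
                out[eid] = self.msg_frame[eid]
                self.msg_index[eid] = False
        return out


g = MsgIndexSketch(num_edges=4)
g.send([0, 1], lambda e: e * 10)   # first send, first message function
g.send([2], lambda e: e + 100)     # second send, different function
first = g.recv([0, 2])             # {0: 0, 2: 102}
second = g.recv([1, 3])            # {1: 10}; edge 3 never got a message
```

Because recv only clears the bits it actually consumed, a second recv still sees messages left by earlier send calls, which is the behavior this PR enables.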

Checklist

  • The PR title starts with [$CATEGORY] (such as [Model], [Doc], [Feature])
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change
    or have been fixed to be compatible with it
  • The related issue is referenced in this PR

Changes

  • fix multiple send (using different fields)
  • fix multiple recv and remove message graph
  • fix frame append bug when some feature is not in the new frame
  • fix bug in send when edges is ALL
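The frame append fix in the third bullet can be illustrated with a dict-of-columns sketch (a hypothetical helper, not the actual dgl.frame code): when the appended rows lack one of the frame's features, the columns must be padded so they all stay the same length.

```python
def append_frame(frame, new_rows, pad=0.0):
    """Append new_rows (a dict of column -> list) to frame (same shape).

    Columns missing from new_rows are padded so every column keeps the
    same length -- the failure mode the frame fix in this PR addresses.
    Illustrative sketch only, not dgl.frame.Frame.append.
    """
    num_new = max((len(v) for v in new_rows.values()), default=0)
    for col in frame:
        frame[col].extend(new_rows.get(col, [pad] * num_new))
    return frame


f = {"h": [1, 2], "m": [5, 6]}
append_frame(f, {"h": [3, 4]})     # feature "m" is absent from new rows
# f["h"] -> [1, 2, 3, 4]; f["m"] -> [5, 6, 0.0, 0.0]
```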

python/dgl/frame.py
python/dgl/graph.py
python/dgl/runtime/ir/executor.py
python/dgl/runtime/scheduler.py
python/dgl/utils.py
# XXX: assuming the tensor supports indexing with slices.
# We cannot call tousertensor because, for the slice type, it would
# further call tonumpy, which overwrites _pydata and causes
# bugs later...
Member

What does this mean?

Collaborator Author

@lingfanyu lingfanyu Dec 20, 2018

Originally, when I implemented get_items and set_items, I wrote

if isinstance(index, Index):
    index = index.tousertensor()

I just found that this is buggy: if index._pydata is a slice, index.tousertensor will call tonumpy, which overwrites _pydata. As a result, index._pydata becomes a numpy.ndarray, which can cause bugs later.
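The pitfall can be reproduced with a tiny stand-in for the Index class (the method names mirror this discussion, but the class is a simplified sketch, with a plain list standing in for the numpy.ndarray of the real code):

```python
class Index:
    """Simplified stand-in for dgl.utils.Index showing the mutation bug.

    A plain list stands in for the numpy.ndarray produced by the real
    tonumpy; the point is that conversion caches its result back into
    _pydata, destroying the original slice.
    """

    def __init__(self, data):
        self._pydata = data  # may be a slice or a list of ids

    def tonumpy(self):
        if isinstance(self._pydata, slice):
            s = self._pydata
            # Buggy caching: the slice is silently replaced in place.
            self._pydata = list(range(s.start, s.stop, s.step or 1))
        return self._pydata

    def tousertensor(self):
        return self.tonumpy()  # delegates, so it also mutates _pydata


idx = Index(slice(0, 4))
was_slice = isinstance(idx._pydata, slice)      # True before conversion
values = idx.tousertensor()                     # [0, 1, 2, 3]
still_slice = isinstance(idx._pydata, slice)    # False: overwritten
```

A caller that later checks `isinstance(index._pydata, slice)` to take a fast path silently misbehaves after any conversion, which is why the fixed code avoids the conversion for slice-typed indices.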

if apply_func is not None:
    schedule_apply_nodes(graph, recv_nodes, apply_func, inplace)
else:
    src, dst, _ = graph._graph.find_edges(eid)
Member

@jermainewang jermainewang Dec 19, 2018

lingfan: Mistakenly removed your comment...

Collaborator Author

@lingfanyu lingfanyu Dec 20, 2018

You are right about using a mask on src and dst to avoid calling find_edges.
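The suggestion can be sketched as follows (hypothetical variable and function names): instead of calling find_edges(eid) to recover edge endpoints, keep the per-edge src/dst arrays already at hand and select the entries for edges that actually carry a message with a boolean mask.

```python
def masked_endpoints(src, dst, msg_index, eids):
    """Select endpoints of the edges that actually carry a message.

    src and dst are parallel per-edge endpoint lists, msg_index is the
    boolean message mask, and eids are candidate edge ids.  A sketch of
    the reviewer's suggestion, not the actual scheduler code.
    """
    keep = [e for e in eids if msg_index[e]]
    return [src[e] for e in keep], [dst[e] for e in keep]


src = [0, 0, 1, 2]
dst = [1, 2, 2, 3]
msg_index = [True, False, True, True]
u, v = masked_endpoints(src, dst, msg_index, [0, 1, 2])
# u -> [0, 1], v -> [1, 2]; edge 1 is masked out, no find_edges lookup
```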

Member

@jermainewang jermainewang left a comment

LGTM. Minor comments.

python/dgl/utils.py
@jermainewang jermainewang merged commit 2664ed2 into dmlc:master Dec 24, 2018
@lingfanyu lingfanyu deleted the fix-multi-send-recv branch December 24, 2018 02:42

Successfully merging this pull request may close these issues.

[BUG] Consecutive sends without recv
3 participants