
[GraphBolt] modify logic for HeteroItemSet indexing #7428

Open · wants to merge 10 commits into base: master

Conversation

@Skeleton003 (Collaborator) commented May 27, 2024

Description

First, let's look at the current code for indexing a HeteroItemSet (in HeteroItemSet.__getitem__):

        elif isinstance(index, Iterable):
            if not isinstance(index, torch.Tensor):
                index = torch.tensor(index)
            assert torch.all((index >= 0) & (index < self._length))
            # Map each index to the key (type) whose offset range contains it.
            key_indices = (
                torch.searchsorted(self._offsets, index, right=True) - 1
            )
            data = {}
            for key_id, key in enumerate(self._keys):
                # One O(N) boolean scan per key: the O(N * K) bottleneck.
                mask = (key_indices == key_id).nonzero().squeeze(1)
                if len(mask) == 0:
                    continue
                data[key] = self._itemsets[key][
                    index[mask] - self._offsets[key_id]
                ]
            return data

Say the length of the indices is N and the number of etypes/ntypes is K. Then the time complexity of the current indexing implementation is O(N * K), which is mainly introduced by the line

mask = (key_indices == key_id).nonzero().squeeze(1)

If there are a lot of etypes, this line could easily become the bottleneck.
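To make that cost concrete, here is a minimal numpy-based sketch of the same per-key masking loop (the function name, the `offsets` layout, and the data are illustrative, not the PR's code): the `key_indices == key_id` comparison touches all N indices once per key.

```python
import numpy as np

def index_per_key_masks(index, offsets):
    """Sketch of the current masking approach: assign each index to its
    key bucket via binary search, then build one mask per key.
    The `key_indices == key_id` scan costs O(N) per key, O(N * K) total."""
    key_indices = np.searchsorted(offsets, index, side="right") - 1
    per_key = {}
    for key_id in range(len(offsets) - 1):
        mask = np.nonzero(key_indices == key_id)[0]  # O(N) scan per key
        if len(mask) == 0:
            continue
        per_key[key_id] = index[mask] - offsets[key_id]
    return per_key

# Hypothetical layout: 3 keys holding 10, 20, and 30 items.
offsets = np.array([0, 10, 30, 60])
index = np.array([5, 12, 35, 59])
buckets = index_per_key_masks(index, offsets)
```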

This draft PR proposes an alternative to the current logic:

        elif isinstance(index, Iterable):
            if not isinstance(index, torch.Tensor):
                index = torch.tensor(index)
            # Sort once (O(N * logN)), then locate each key's contiguous
            # slice with a single searchsorted over the sorted indices.
            sorted_index, indices = index.sort()
            assert sorted_index[0] >= 0 and sorted_index[-1] < self._length
            index_offsets = torch.searchsorted(sorted_index, self._offsets)
            data = {}
            for key_id, key in enumerate(self._keys):
                if index_offsets[key_id] == index_offsets[key_id + 1]:
                    continue
                # Re-sort the slice to restore the original input order.
                current_indices, _ = indices[
                    index_offsets[key_id] : index_offsets[key_id + 1]
                ].sort()
                data[key] = self._itemsets[key][
                    index[current_indices] - self._offsets[key_id]
                ]
            return data

whose time complexity is O(N * logN), where the log factor is introduced by the sorting operation.

This improves performance when there are many etypes, but may be slower when there are few. The key design question is how to strike a balance between the two approaches.
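One way to strike that balance, sketched below with numpy for brevity (the function name, the `threshold_ratio` heuristic, and the data layout are all hypothetical, not the PR's final code), is to dispatch on K versus log2(N) and let benchmarks pick the constant:

```python
import math
import numpy as np

def hetero_index(index, offsets, threshold_ratio=1.0):
    """Hypothetical hybrid dispatch between the two approaches.

    Per-key masking costs O(N * K); the sort-based path costs O(N * logN).
    So a plausible switch is to mask when K <= threshold_ratio * log2(N)
    and to sort otherwise; the right constant has to come from benchmarks."""
    n, k = len(index), len(offsets) - 1
    data = {}
    if k <= threshold_ratio * math.log2(max(n, 2)):
        # O(N * K) path: one boolean scan over all indices per key.
        key_indices = np.searchsorted(offsets, index, side="right") - 1
        for key_id in range(k):
            mask = np.nonzero(key_indices == key_id)[0]
            if len(mask):
                data[key_id] = index[mask] - offsets[key_id]
    else:
        # O(N * logN) path: sort once, then slice per key via searchsorted.
        order = np.argsort(index, kind="stable")
        bounds = np.searchsorted(index[order], offsets)
        for key_id in range(k):
            lo, hi = bounds[key_id], bounds[key_id + 1]
            if lo < hi:
                picked = np.sort(order[lo:hi])  # restore input order
                data[key_id] = index[picked] - offsets[key_id]
    return data

index = np.array([5, 12, 35, 59, 1])
offsets = np.array([0, 10, 30, 60])
masked = hetero_index(index, offsets, threshold_ratio=100.0)  # force mask path
sorted_ = hetero_index(index, offsets, threshold_ratio=0.0)   # force sort path
```

Both branches return the same per-key dictionary; only the constant factors differ, which is exactly what the benchmark in the description measures.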

Update on June 18

Benchmark: https://docs.google.com/document/d/1Bbmp8gMekiGIYYxEMVbmXSANRZlZ_nTNbhpWul4RaKA/edit?usp=sharing

The results show that the original algorithm is faster than the new algorithm (theoretical time complexity O(N*logN)) for almost all values of batch_size and num_types.

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
  • I've leveraged the tools to beautify the Python and C++ code.
  • The PR is complete and small; read the Google eng practice (a CL equals a PR) to learn more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code change to be small (examples, tests and documentation may be exempted).
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
  • Related issue is referred in this PR
  • If the PR is for a new model/paper, I've updated the example index here.

Changes

@dgl-bot commented May 27, 2024

To trigger regression tests:

  • @dgl-bot run [instance-type] [which tests] [compare-with-branch];
    For example: @dgl-bot run g4dn.4xlarge all dmlc/master or @dgl-bot run c5.9xlarge kernel,api dmlc/master

@Skeleton003 Skeleton003 changed the title Heteroitemset getitem iter [GraphBolt] modify logic for HeteroItemSet indexing May 27, 2024
@dgl-bot commented May 27, 2024

Commit ID: 6c3a7f2

Build ID: 1

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@Skeleton003 Skeleton003 marked this pull request as ready for review June 12, 2024 22:02
@Skeleton003 (PR author) commented:

@dgl-bot

@dgl-bot commented Jun 13, 2024

Commit ID: b4c0ea46dfefd4426afc83d6a6098b4495a653b5

Build ID: 2

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@mfbalin (Collaborator) commented Jun 17, 2024

Do you have a benchmark comparing the new approach to the old one for different K values?

Review thread on:

        data[key] = self._itemsets[key][
            index[mask] - self._offsets[key_id]
        ]
        if len(index) < self._threshold:
@mfbalin (Collaborator) commented Jun 17, 2024

I think we need a benchmark before setting such a threshold based only on runtime complexity.

@Skeleton003 (PR author) replied:

You're right. I'll do it right away.

A collaborator replied:

Let's use K values 1, 2, 4, 8, 16, 32, etc.

A collaborator replied:

This is a little bit complex. I think the optimal complexity should be O(N*logK), with two additional helper arrays: offsets = [0, 10, 30, 60] and etypes = ["A", "B", "C", "D"].
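As a sketch of that suggestion (extending the comment's example with a final bound of 100 so all four types are addressable; the names are illustrative): each element finds its type with one O(logK) binary search over the offsets, and a single linear pass groups the results. As the thread notes further down, making this fast in practice would require a custom parallel C++ kernel.

```python
import numpy as np

# The comment's helper arrays, with a final bound added so "D" is addressable.
offsets = np.array([0, 10, 30, 60, 100])
etypes = ["A", "B", "C", "D"]

def assign_types(index):
    """O(N * logK) bucket assignment: one binary search over the K+1
    offsets per element, then a single O(N) pass to group the results."""
    key_ids = np.searchsorted(offsets, index, side="right") - 1
    groups = {}
    for pos, key_id in enumerate(key_ids):
        groups.setdefault(etypes[key_id], []).append(index[pos] - offsets[key_id])
    return groups

result = assign_types(np.array([5, 12, 35, 75]))
```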

A collaborator replied:

NVM, I see the point: to do it the way I suggested, we would need to implement our own C++ kernel with parallel optimization.

A collaborator replied:

Are any of the ops used in the new implementation single-threaded?

@Skeleton003 (PR author) replied:

Seems not. Maybe the sorting op itself is a bit slow even though it's multi-threaded?

@mfbalin (Collaborator) commented Jun 19, 2024

How did you verify?

@Skeleton003 (PR author) commented Jun 19, 2024

I'm not sure, I'm just guessing. Did you find out anything from the benchmarking code?

@dgl-bot commented Jun 17, 2024

Commit ID: 50da718

Build ID: 3

Status: ❌ CI test failed in Stage [Distributed Torch CPU Unit test].

Report path: link

Full logs path: link

@Skeleton003 (PR author) commented:

@mfbalin @frozenbugs See benchmark results in the description. The new implementation does not seem to be as efficient as we thought. Maybe we should keep it as is?

@dgl-bot commented Jun 18, 2024

Commit ID: 591c122323236759ec8e2df4021308331e93cf6b

Build ID: 4

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@mfbalin (Collaborator) commented Jun 18, 2024

> @mfbalin @frozenbugs See benchmark results in the description. The new implementation does not seem to be as efficient as we thought. Maybe we should keep it as is?

Let me take a look at the code to see if we missed anything. Thank you for the benchmark.

Review thread on:

        if len(index) < self._threshold:
            # Say N = len(index), and K = num_types.
            # If logN < K, we use the algo with time complexity O(N*logN).
            sorted_index, indices = index.sort()
@mfbalin (Collaborator) commented Jun 28, 2024

Can we try numpy.argsort here to get indices and index[indices] to get sorted_index? It looks like numpy might have a more efficient sorting implementation. When benchmarking, we should ensure that we have a recent version of numpy installed. It looks like numpy uses this efficient sorting implementation by intel: https://github.com/intel/x86-simd-sort
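A sketch of that idea in pure numpy (for a CPU torch tensor, the array could come from `index.numpy()` and the results could go back through `torch.from_numpy`, both of which share memory rather than copy):

```python
import numpy as np

def sort_with_numpy(arr):
    """Get the permutation from np.argsort and the sorted values by fancy
    indexing, instead of calling torch's Tensor.sort(). On recent numpy
    (>= 1.25), sorting common integer/float dtypes can dispatch to the
    SIMD kernels from intel/x86-simd-sort."""
    indices = np.argsort(arr)
    return arr[indices], indices

sorted_arr, indices = sort_with_numpy(np.array([5, 1, 3]))
```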

A collaborator replied:

This is assuming that the sort is the bottleneck for this code.

@Skeleton003 (PR author) replied:

Thanks for the info!

> When benchmarking, we should ensure that we have a recent version of numpy installed.

How recent a version do we need? It seems that we have just disabled numpy>=2.0.0 in #7479.

A collaborator replied:

numpy/numpy#22315
They added the faster sort in this PR. It looks like the version number is 1.25 or later.
https://github.com/search?q=repo%3Anumpy%2Fnumpy%20%2322315&type=code

A collaborator replied:

Let's use the latest 1.x version and see how the performance is.

@Skeleton003 (PR author) replied:

Though the code changes were committed in numpy/numpy#22315, where the version is 1.25, this improvement was not officially announced until the NumPy 2.0.0 release notes. Therefore, it is likely that the changes were not fully integrated until version 2.0.0.

I plan to propose at the Monday meeting that we offer full support for numpy>=2.0.0, and to perform the benchmark after we do so.

A collaborator replied:

numpy>=2 is compatible with DGL. I think we can perform this benchmark. I just ran the graphbolt tests with numpy>=2 installed and all passed.

A collaborator replied:

Thank you for the updated numbers. I will profile the benchmark code and see if there is a potential improvement we can make.

Review thread on:

                    continue
                current_indices, _ = indices[
                    index_offsets[key_id] : index_offsets[key_id + 1]
                ].sort()
A collaborator replied:

We could use np.sort here as well.

4 participants