Optimise shaders #1381

Open · wants to merge 1 commit into master
Conversation

AnyOldName3 (Contributor) commented:

As discussed in #1327.

Unfortunately, on my conveniently-available hardware (AMD running Windows), it doesn't seem to make any difference. It's definitely changing the bytecode that's sent to the driver; I checked that thoroughly after an earlier revision of this work turned out to be optimising only the first shader it was given.

I don't know whether that's because:

  • AMD's Windows driver already optimises SPIR-V bytecode at least this well before running it. This is the most plausible explanation to me.
  • The VSG's stock shader isn't particularly amenable to optimisation.
  • I've not tested it on anything where shader execution (beyond the unavoidable parts) takes a meaningful amount of time.

It would make sense for other people to test this on other machines; I'd be unsurprised if embedded platforms didn't bother optimising the SPIR-V they're given, but I don't have any such hardware myself.

As this seems to accomplish nothing on the hardware I've tested, there may be disadvantages to merging it:

  • In general, it can be harmful to run optimisations on something that's already been optimised. If desktop drivers already optimise shaders themselves, and do so under the assumption that the input comes straight from glslang, then feeding them pre-optimised SPIR-V could reduce framerates.
  • It's more complicated than not doing any of this: I hit enough situations where shader optimisation failed due to upstream bugs that I felt it was necessary to fall back to the unoptimised bytecode instead of crashing (see the sketch after this list).
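
For reference, that fallback pattern could look something like the minimal sketch below, using SPIRV-Tools' C++ optimizer API. The function name and target environment are my own illustrative choices, not the PR's actual code:

```cpp
#include <cstdint>
#include <vector>

#include "spirv-tools/optimizer.hpp"

// Minimal sketch: run SPIRV-Tools' standard performance passes over a module,
// but keep the original bytecode if optimisation fails (e.g. due to an
// upstream bug) rather than crashing or handing the driver a broken module.
std::vector<uint32_t> optimizeOrFallback(const std::vector<uint32_t>& original)
{
    // The target environment is an assumption; it should match the Vulkan
    // version the application actually requests.
    spvtools::Optimizer optimizer(SPV_ENV_VULKAN_1_2);
    optimizer.RegisterPerformancePasses();

    std::vector<uint32_t> optimized;
    if (optimizer.Run(original.data(), original.size(), &optimized))
    {
        return optimized;
    }

    // Optimisation failed: fall back to the unoptimised SPIR-V.
    return original;
}
```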

Those disadvantages would be mitigated by making the default value of `optimize` false instead of true, and by not shipping optimised versions of the precompiled shaders (or perhaps shipping both an optimised and an unoptimised version of each).
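
As a rough sketch of that opt-in default (the settings struct and helper here are hypothetical, not the VSG's actual API), the flag would simply gate the optimiser:

```cpp
#include <cstdint>
#include <vector>

// From the sketch above.
std::vector<uint32_t> optimizeOrFallback(const std::vector<uint32_t>& original);

// Hypothetical settings object: optimisation is opt-in rather than on by
// default, so drivers that already optimise SPIR-V themselves aren't handed
// pre-optimised modules.
struct ShaderCompileSettings
{
    bool optimize = false; // set to true to run the performance passes
};

std::vector<uint32_t> maybeOptimize(const std::vector<uint32_t>& spirv,
                                    const ShaderCompileSettings& settings)
{
    return settings.optimize ? optimizeOrFallback(spirv) : spirv;
}
```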
