I was talking to one of the authors of an OOPSLA paper on alias analysis in the context of compiler optimizations today https://dl.acm.org/doi/10.1145/3563316 (fancy title, btw) and asked her how she evaluates her work:
They apparently use the SPEC CPU 2017 benchmark set, which has several C programs with LoC of up to >1M. I know we at some point bought SPEC CPU 2006 or something; is there a specific reason we are not using these benchmarks these days?
Both valid points. One could probably even include the tests in supplementary artefacts for the reviewers and then publish a version without them. If this is not a dealbreaker at OOPSLA, it likely also won't be at other venues.
On the other hand, having GCC and Blender combined into a single file makes them far easier to analyze. Might well be worth the 50 bucks if you estimate how much it costs us to have one of us fixing build scripts for hours on end.