
We are displaying display driver info, scope creep #710

Merged
rhatdan merged 1 commit into main from rm-display-driver-info on Feb 2, 2025

Conversation

ericcurtin
Collaborator

@ericcurtin ericcurtin commented Feb 2, 2025

We have to be mindful of the maintenance of the codebase. Display drivers have nothing to do with AI acceleration. Don't show display info such as "Color LCD":

{
    "Engine": {
        "Name": null
    },
    "GPUs": {
        "Detected GPUs": [
            {
                "Cores": "18",
                "GPU": "Apple M3 Pro",
                "Metal": "Metal 3",
                "Vendor": "Apple (0x106b)"
            },
            {
                "GPU": "Color LCD"
            }
        ],
        "INFO": "No errors"
    },
    "Image": "quay.io/ramalama/ramalama",
    "Runtime": "llama.cpp",
    "Store": "/Users/ecurtin/.local/share/ramalama",
    "UseContainer": false,
    "Version": "0.0.19"
}

I'm not sure about the "macOS detection covers AMD GPUs" code. We could potentially have external GPUs on macOS, but even in that case the code seems illogical.
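
For illustration, here is a minimal sketch (not the code in this PR) of how "system_profiler SPDisplaysDataType" output can be walked so that only "Chipset Model:" lines start a new GPU entry; nested display names such as "Color LCD" then never surface as GPUs. The key names follow typical SPDisplaysDataType output and may vary across macOS versions:

    import subprocess


    def parse_macos_gpus(text):
        """Parse system_profiler SPDisplaysDataType text output.

        Only "Chipset Model:" lines open a new GPU entry, so nested display
        sections such as "Color LCD:" are never treated as GPUs.
        """
        gpus = []
        current = None
        for raw in text.splitlines():
            line = raw.strip()
            if line.startswith("Chipset Model:"):
                if current:
                    gpus.append(current)
                current = {"GPU": line.split(":", 1)[1].strip()}
            elif current and line.startswith("Total Number of Cores:"):
                current["Cores"] = line.split(":", 1)[1].strip()
            elif current and line.startswith("Vendor:"):
                current["Vendor"] = line.split(":", 1)[1].strip()
            elif current and line.startswith(("Metal:", "Metal Support:")):
                current["Metal"] = line.split(":", 1)[1].strip()
        if current:
            gpus.append(current)
        return gpus


    if __name__ == "__main__":
        out = subprocess.check_output(["system_profiler", "SPDisplaysDataType"], text=True)
        print(parse_macos_gpus(out))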

Summary by Sourcery

Refine GPU detection and logging to exclude display adapters and streamline logging setup.

Bug Fixes:

  • Prevent display adapters from being incorrectly identified as GPUs.

Enhancements:

  • Simplify logging configuration.

Signed-off-by: Eric Curtin <[email protected]>
Contributor

sourcery-ai bot commented Feb 2, 2025

Reviewer's Guide by Sourcery

This pull request refactors the GPU detection logic to remove display driver information and correct macOS GPU detection. The changes ensure that only relevant GPU information is included, and the macOS detection logic is more robust.

Sequence diagram for macOS GPU detection flow

sequenceDiagram
    participant C as CLI
    participant D as GPUDetector
    participant S as system_profiler

    C->>D: get_macos_gpu()
    D->>S: system_profiler SPDisplaysDataType
    S-->>D: Raw GPU information
    Note over D: Parse GPU info:
    Note over D: - Chipset Model
    Note over D: - Cores
    Note over D: - Vendor
    Note over D: - Metal Support
    D-->>C: Filtered GPU information

Class diagram for GPUDetector changes

classDiagram
    class GPUDetector {
        +get_nvidia_gpu()
        +get_amd_gpu()
        +get_intel_gpu()
        +get_macos_gpu()
        +detect_best_gpu(gpu_template)
        -_read_gpu_memory(path_pattern, gpu_name, env_var)
    }
    note for GPUDetector "Simplified macOS GPU detection
Removed display info parsing"

File-Level Changes

Change: Refactor GPU detection to exclude display drivers.
  • Removed display driver information from the GPU detection output.
  • Modified the macOS GPU detection logic to only include relevant GPU information.
  • Updated the CLI output to reflect the changes in GPU detection.
Files: ramalama/gpu_detector.py, ramalama/cli.py

Change: Correct macOS GPU detection logic.
  • Removed the assumption that macOS detection covers AMD GPUs.
  • Refactored the macOS GPU detection logic to correctly parse the output of system_profiler.
  • Ensured that the last detected GPU is added to the list of GPUs.
Files: ramalama/gpu_detector.py

Change: Code cleanup and formatting.
  • Removed unused variables and code.
  • Added a space between the -ngl flag and its value in model.py (see the sketch below).
  • Standardized logging format.
Files: ramalama/gpu_detector.py, ramalama/model.py
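
Regarding the -ngl spacing fix noted above: llama.cpp reads the flag and its value as separate tokens, so they must not be run together. A hypothetical sketch; the helper and binary names are illustrative and not taken from model.py:

    def build_llama_args(model_path, gpu_layers=999):
        # Hypothetical helper: "-ngl" and its value are passed as separate
        # argv entries ("-ngl", "999"), not the concatenated "-ngl999",
        # so llama.cpp can parse the layer count.
        return ["llama-cli", "-m", model_path, "-ngl", str(gpu_layers)]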


@sourcery-ai sourcery-ai bot left a comment


Hey @ericcurtin - I've reviewed your changes and they look great!

Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


@@ -304,7 +304,7 @@ def show_gpus_available_cli(args):

     return {
         "Detected GPUs": gpu_info if gpu_info else [{"GPU": "None", "VRAM": "N/A", "INFO": "No GPUs detected"}],
-        "INFO": errors if errors else "No errors"
+        "INFO": errors if errors else "No errors",

suggestion (code-quality): Replace if-expression with or (or-if-exp-identity)

Suggested change
-        "INFO": errors if errors else "No errors",
+        "INFO": errors or "No errors",


Explanation: Here we find ourselves setting a value if it evaluates to True, and otherwise using a default.

The 'After' case is a bit easier to read and avoids the duplication of
input_currency.

It works because the left-hand side is evaluated first. If it evaluates to
true then currency will be set to this and the right-hand side will not be
evaluated. If it evaluates to false the right-hand side will be evaluated and
currency will be set to DEFAULT_CURRENCY.
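
Reconstructed from the names mentioned in the explanation (a generic illustration, not code from this repository):

    DEFAULT_CURRENCY = "USD"
    input_currency = ""  # e.g. the user left the field blank

    # Before: the if-expression repeats input_currency
    currency = input_currency if input_currency else DEFAULT_CURRENCY

    # After: "or" short-circuits, so DEFAULT_CURRENCY is only used when
    # input_currency is falsy (None, "", 0, ...)
    currency = input_currency or DEFAULT_CURRENCY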

@@ -46,7 +43,9 @@ def get_nvidia_gpu(self):
try:

issue (code-quality): Explicitly raise from a previous error [×3] (raise-from-previous-error)
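
For reference, the pattern being asked for chains the original exception with "from", so the traceback keeps its cause. A generic sketch under that assumption, not the exact ramalama code:

    import subprocess


    def run_detection(cmd):
        try:
            return subprocess.check_output(cmd, text=True)
        except FileNotFoundError as e:
            # "raise ... from e" stores the original exception as __cause__,
            # so the traceback still shows what actually failed.
            raise RuntimeError(f"{cmd[0]} not found; is it installed?") from e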

-            output = subprocess.check_output(
-                ["system_profiler", "SPDisplaysDataType"], text=True
-            )
+            output = subprocess.check_output(["system_profiler", "SPDisplaysDataType"], text=True)

issue (code-quality): Extract code out into method (extract-method)
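
One way to act on this suggestion (illustrative only; the helper name is made up) is to pull the text parsing out of get_macos_gpu into its own method:

    import subprocess


    class GPUDetector:
        def get_macos_gpu(self):
            output = subprocess.check_output(["system_profiler", "SPDisplaysDataType"], text=True)
            return self._parse_spdisplays(output)

        def _parse_spdisplays(self, output):
            # Hypothetical helper: isolating the parsing here keeps
            # get_macos_gpu short and makes the parsing testable on its own.
            gpus = []
            for line in output.splitlines():
                if line.strip().startswith("Chipset Model:"):
                    gpus.append({"GPU": line.split(":", 1)[1].strip()})
            return gpus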

@@ -157,7 +142,6 @@ def get_macos_gpu(self):
             logging.error(f"Unexpected error while detecting macOS GPU: {e}")
             return [{"GPU": "Unknown", "Error": str(e)}]

-
     def detect_best_gpu(self, gpu_template):

issue (code-quality): Low code quality found in GPUDetector.detect_best_gpu - 24% (low-code-quality)


Explanation: The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

  • Reduce the function length by extracting pieces of functionality out into
    their own functions. This is the most important thing you can do - ideally a
    function should be less than 10 lines.
  • Reduce nesting, perhaps by introducing guard clauses to return early (see the sketch below).
  • Ensure that variables are tightly scoped, so that code using related concepts
    sits together within the function rather than being scattered.
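
As a generic illustration of the guard-clause advice (not the real detect_best_gpu logic), early returns keep the main path flat:

    def pick_best_gpu(gpus):
        # Guard clauses: handle the empty and unusable cases first and return
        # early, so the main path is not buried under nested if-blocks.
        if not gpus:
            return None
        usable = [g for g in gpus if g.get("GPU") not in (None, "None", "Unknown")]
        if not usable:
            return None
        # Main path: prefer the GPU reporting the most cores (illustrative criterion).
        return max(usable, key=lambda g: int(g.get("Cores", 0)))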

@ericcurtin
Collaborator Author

ericcurtin commented Feb 2, 2025

I ran "make lint" on the codebase and it auto-formatted. A lot of non-functional changes that the AI bot commented on as low quality, some are debatable as they are style-based.

@rhatdan
Member

rhatdan commented Feb 2, 2025

LGTM

@rhatdan rhatdan merged commit c5c4418 into main Feb 2, 2025
10 of 11 checks passed
@ericcurtin ericcurtin deleted the rm-display-driver-info branch February 2, 2025 22:22