
feat: container level security context #345

Merged
5 commits merged into grafana:main on Jan 12, 2024

Conversation

@llaszkie (Contributor) commented Dec 6, 2023

Hi!

The feature differentiates securityContext at the pod and container level. It makes the `k6-operator` capable of operating on K8S >= 1.25 when PSS (Pod Security Standards) are enforced by the built-in Pod Security Admission controller and the level is set to Restricted.

Sample usage:

```yaml
apiVersion: k6.io/v1alpha1
kind: TestRun
spec:
  runner:
    securityContext:
      runAsNonRoot: true
    containerSecurityContext:
      allowPrivilegeEscalation: false
```
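For orientation, a minimal sketch of how the runner spec could carry both contexts using the upstream corev1 types (the struct and field names here are assumptions based on the sample above, not the exact merged code):

```go
// Sketch only: names assumed from the sample usage, not copied from the PR.
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// Pod describes the runner's pod in the TestRun API.
type Pod struct {
	// ...existing runner fields (image, resources, etc.)...

	// SecurityContext is applied at the pod level.
	SecurityContext corev1.PodSecurityContext `json:"securityContext,omitempty"`

	// ContainerSecurityContext is the new field, applied to the container.
	ContainerSecurityContext corev1.SecurityContext `json:"containerSecurityContext,omitempty"`
}
```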

@CLAassistant commented Dec 6, 2023

CLA assistant check
All committers have signed the CLA.

@llaszkie (Contributor Author) commented Dec 6, 2023

fixes #297

@llaszkie changed the title from "feat: container level security context, fixes #297" to "feat: container level security context" on Dec 6, 2023
@yorugac (Collaborator) commented Dec 7, 2023

Hi @llaszkie, thanks for working on this!
FYI, I'll be able to review this PR fully next week. For now, just one question: you chose to add a new field to Pod. Kubernetes does have 2 separate objects for the pod's and the container's security context respectively, but since our TestRun API doesn't expose containers, a new field has to be added under a totally new name, containerSecurityContext. Have you considered making a wrapper struct securityContext instead of a new field, one that would contain fields from both objects and could then be reduced to each with .ToPodSecurityContext() and .ToContainerSecurityContext() methods? This is the approach I wanted to explore in the context of this issue but didn't have time to dig into the details yet: hence, the question 🙂
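For illustration, a rough sketch of what that wrapper could look like (the struct name, the field subset, and the methods are hypothetical, only to make the question concrete):

```go
// Hypothetical wrapper; not an actual k6-operator type.
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// SecurityContext would combine fields from both corev1.PodSecurityContext
// and corev1.SecurityContext into a single user-facing struct.
type SecurityContext struct {
	RunAsNonRoot             *bool                  `json:"runAsNonRoot,omitempty"`
	RunAsUser                *int64                 `json:"runAsUser,omitempty"`
	AllowPrivilegeEscalation *bool                  `json:"allowPrivilegeEscalation,omitempty"`
	SeccompProfile           *corev1.SeccompProfile `json:"seccompProfile,omitempty"`
	// ...remaining fields from both objects...
}

// ToPodSecurityContext keeps only the pod-level fields.
func (s SecurityContext) ToPodSecurityContext() *corev1.PodSecurityContext {
	return &corev1.PodSecurityContext{
		RunAsNonRoot:   s.RunAsNonRoot,
		RunAsUser:      s.RunAsUser,
		SeccompProfile: s.SeccompProfile,
	}
}

// ToContainerSecurityContext keeps only the container-level fields.
func (s SecurityContext) ToContainerSecurityContext() *corev1.SecurityContext {
	return &corev1.SecurityContext{
		RunAsNonRoot:             s.RunAsNonRoot,
		RunAsUser:                s.RunAsUser,
		AllowPrivilegeEscalation: s.AllowPrivilegeEscalation,
		SeccompProfile:           s.SeccompProfile,
	}
}
```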

@llaszkie (Contributor Author) commented Dec 8, 2023

Hi @yorugac,

Yes, I considered the wrapper solution, as suggested in #297, but it could bring more challenges than benefits. In the K8S definitions those contexts are defined separately (pod here and container here), yet they overlap in some attributes (sic!), like seccompProfile. The overlap is a valid use case (for example for pod volumes), and its handling is defined in the spec too. How would you then distinguish the user's intention to set an attribute on the Container but not on the Pod, if your API exposes only a combined wrapper?
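To make the overlap concrete, a small sketch using the upstream corev1 types (the RuntimeDefault value is just an example): the intent below, container set but pod left unset, cannot be expressed through a single combined wrapper.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Intent: leave the pod-level seccompProfile unset, but opt the k6
	// container into RuntimeDefault. A flat wrapper has only one
	// seccompProfile field, so "container yes, pod no" is lost.
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{}, // pod level: not set
		Containers: []corev1.Container{{
			Name: "k6",
			SecurityContext: &corev1.SecurityContext{ // container level: set
				SeccompProfile: &corev1.SeccompProfile{
					Type: corev1.SeccompProfileTypeRuntimeDefault,
				},
			},
		}},
	}
	fmt.Println(spec.Containers[0].SecurityContext.SeccompProfile.Type)
}
```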

What could be done instead is to explicitly expose the Container in your API. It would align both models (K8S and k6) even further, and the solution could be cleaner. Let me know what you think about that approach.

@yorugac (Collaborator) commented Dec 12, 2023

> What could be done instead is to explicitly expose Container in your API

WDYM by that?

After looking into this a bit, yes, I agree that the Kubernetes definitions clearly aim to expose more granular control over the security context, hence the two objects. It'd be nice if we could avoid over-complicating our API. I think the main problem with the current implementation is that containerSecurityContext is passed only to the main container and not to the init containers. It should probably be passed everywhere if it is defined.

@yorugac (Collaborator) commented Dec 12, 2023

> init containers

I.e. here: https://github.com/grafana/k6-operator/blob/main/pkg/resources/jobs/helpers.go#L120-L147
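A hedged sketch of the kind of change meant here (the function shape and the InitContainer type are simplified assumptions, not the actual helpers.go code): every init container built for the job would reuse the container-level context.

```go
// Assumed, simplified shapes; the real builder in
// pkg/resources/jobs/helpers.go differs in detail.
package jobs

import corev1 "k8s.io/api/core/v1"

type InitContainer struct {
	Name    string
	Image   string
	Command []string
}

// getInitContainers applies the same container-level security context to
// every init container, not only to the main runner container.
func getInitContainers(ics []InitContainer, sc corev1.SecurityContext) []corev1.Container {
	var containers []corev1.Container
	for _, ic := range ics {
		containers = append(containers, corev1.Container{
			Name:            ic.Name,
			Image:           ic.Image,
			Command:         ic.Command,
			SecurityContext: &sc,
		})
	}
	return containers
}
```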

@llaszkie (Contributor Author)

> What could be done instead is to explicitly expose Container in your API

I was thinking about extracting the container settings into a separate object embedded within the Pod. Probably too much overhead 😄

Thanks for the hint about the init containers; you are absolutely right. Fixed.

@yorugac (Collaborator) commented Jan 5, 2024

@llaszkie, apologies for the delay. Thanks for the update; the changes seem to be fine 👍 But could you please rebase the branch against main? It can't be merged ATM.

@llaszkie (Contributor Author)

Good morning! @yorugac: rebased. I suppose we are good to go now :-)

@yorugac (Collaborator) left a comment

@llaszkie, I have to ask for 2 more things, though both should be pretty quick:

  • please see my comment about the Helm chart (I accidentally posted it as a standalone comment instead of including it in the review)
  • it'd be nice to add a sample TestRun definition with the new option to config/samples.

@yorugac added this to the 0.13 milestone on Jan 10, 2024
@llaszkie requested a review from yorugac on January 11, 2024, 08:41
@yorugac (Collaborator) left a comment

Thank you for the updates, @llaszkie!

@yorugac merged commit 3118657 into grafana:main on Jan 12, 2024
6 checks passed