[#513] restore PTPA changes to feature branch
To allow for the dev branch to be released, the changes in this commit
were reverted and moved to this feature branch.  The original work was
done in the following tickets:
 - #483 (9b4c2b6) merged in #487
 - #483 (33b597a) merged in #489
 - #488 (5bf5151) merged in #493
ewilkins-csi committed Dec 20, 2024
1 parent b88232d commit 1dfdaa9
Showing 10 changed files with 224 additions and 20 deletions.
12 changes: 8 additions & 4 deletions DRAFT_RELEASE_NOTES.md
@@ -1,7 +1,7 @@
# Major Additions

## Path to Production Alignment
To better align development processes with processes in CI/CD and higher environments, we no longer recommend using Tilt live-reloading. As such, upgrading projects should consider narrowing the scope of their Tiltfile. See _**How to Upgrade**_ for more information.
To better align development processes with processes in CI/CD and higher environments, we no longer recommend using Tilt for building and deploying projects. As such, upgrading projects should consider removing or at least narrowing the scope of their Tiltfile. See _**How to Upgrade**_ for more information.

## Data Access Upgrade
Data access through [GraphQL](https://graphql.org/) has been deprecated and replaced with [Trino](https://trino.io/). Trino is optimized for performing queries against large datasets by leveraging a distributed architecture that processes queries in parallel, enabling fast and scalable data retrieval.
@@ -71,6 +71,10 @@ To deactivate any of these migrations, add the following configuration to the `b

## Precondition Steps - Required for All Projects

### Maven Docker Build
To avoid duplicate docker builds, remove all the related `docker_build()` and `local_resources()` functions from your Tiltfile. Also, the `spark-worker-image.yaml` is no longer used, so the `-deploy/src/main/resources/apps/spark-worker-image` directory and the related `k8s_yaml()` function from your Tiltfile can be removed.

### Beginning the Upgrade
To start your aiSSEMBLE upgrade, update your project's pom.xml to use the 1.11.0 version of the build-parent:
```xml
</parent>
```
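For reference, a hypothetical complete parent block might look as follows; the `groupId` shown is an assumption, so confirm it against your generated project:

```xml
<!-- Hypothetical complete parent block; the groupId is an assumption -->
<parent>
    <groupId>com.boozallen.aissemble</groupId>
    <artifactId>build-parent</artifactId>
    <version>1.11.0</version>
</parent>
```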

### Tilt Docker Builds
To avoid duplicate docker builds, remove all the related `docker_build()` and `local_resources()` functions from your Tiltfile. Also, the `spark-worker-image.yaml` is no longer used so the `-deploy/src/main/resources/apps/spark-worker-image` directory and the related `k8s_yaml()` function from your Tiltfile can be removed.
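As an illustration, the kinds of Tiltfile entries to delete might look like the following sketch; the image name, paths, and resource names are placeholders, not values from this project:

```python
# Hypothetical Tiltfile entries of the kind to remove (names and paths are illustrative)
docker_build('spark-worker-image', 'spark-worker-docker')
k8s_yaml('-deploy/src/main/resources/apps/spark-worker-image/spark-worker-image.yaml')
```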

## Conditional Steps

### For Projects Intending to Keep Tilt
To avoid duplicate docker builds, remove all the related `docker_build()` and `local_resources()` functions from your Tiltfile. Also, the `spark-worker-image.yaml` is no longer used so the `-deploy/src/main/resources/apps/spark-worker-image` directory and the related `k8s_yaml()` function from your Tiltfile can be removed.

## Final Steps - Required for All Projects
### Finalizing the Upgrade
1. Run `./mvnw org.technologybrewery.baton:baton-maven-plugin:baton-migrate` to apply the automatic migrations
65 changes: 49 additions & 16 deletions extensions/extensions-helm/README.md
@@ -27,19 +27,52 @@ Follow the instructions in the [_Kubernetes Artifacts Upgrade_](https://boozalle
section of the _Path to Production > Container Support_ documentation page to update older projects to the new Extensions Helm baseline approach.

## Developing with Extension Helm
* When migrating a module to Extensions Helm and testing it locally, the steps above can be followed.
* As a precursor to those steps, it is helpful to create a simple aiSSEMBLE project to use as a test bed.
* It is important to note that because the module's helm charts will not be published to the remote Helm Repository
until a build via GitHub Actions is completed, the `repository` field in the module's `Chart.yaml` file must be updated
to point to the local aiSSEMBLE baseline code. More specifically, in the test-bed project, the `repository` field in
`<project-name>-deploy/src/main/resources/apps/<module-name>/Chart.yaml` should hold a value of the following form:
`"file://../../../../../../../aissemble/extensions/extensions-helm/aissemble-<module-name>-chart"`
* the file path given is relative to the location of the `Chart.yaml` file, so 7 or more `../` prefixes will be
required to reach wherever the local aiSSEMBLE baseline is stored on your local machine
* in this example 7 `../` prefixes are added to the relative path, as the test-bed project sits in the same directory
as the local `aissemble` baseline code.
* Additionally, for local development only, the application's `Chart.yaml` in its corresponding `aissemble-<app-name>-chart`
should set the `version` and `appVersion` fields to the current aiSSEMBLE version; this allows
testing the local deployment when leveraging Tilt
* If making use of additional aiSSEMBLE charts within your application's dependencies, the dependent subcharts should
have their `version` and `appVersion` updated to the current aiSSEMBLE version as well
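Putting the points above together, a hypothetical test-bed `Chart.yaml` dependency might look like this; the module name, version, and the number of `../` segments are assumptions to adapt to your layout:

```yaml
# Hypothetical test-bed dependency on a locally checked-out aiSSEMBLE chart
# (chart name, version, and ../ depth are placeholders)
dependencies:
  - name: aissemble-spark-chart
    version: 1.11.0
    repository: "file://../../../../../../../aissemble/extensions/extensions-helm/aissemble-spark-chart"
```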

When testing modifications to a Helm chart in a downstream project, special steps must be taken because Helm charts are not published to the remote Helm Repository until a build
via GitHub Actions is completed. First, all modifications to the aiSSEMBLE chart need to be committed and pushed to GitHub. Then, the downstream chart dependency needs to be
updated to point to the modified chart on your branch in GitHub. Unfortunately, this is [still not natively supported in
Helm](https://github.com/boozallen/aissemble/issues/488#issuecomment-2518466847), so we need to do some setup work to enable a plugin that provides this functionality.

### Add the plugin to ArgoCD

The repo server is responsible for running Helm commands to push changes into the cluster. This is
[documented](https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#using-initcontainers) in ArgoCD; however, these instructions did not work with our current chart
version of 7.4.1. (There were discussions on the `argocd-helm` GitHub about a specific update causing a breaking change in this functionality, with the latest docs serving as
the "fix" for that change, so the older instructions may have worked on earlier versions.) To work around this, we add an init container to the repo server that installs the
plugin to a shared volume mount.

```yaml
aissemble-infrastructure-chart:
  argo-cd:
    repoServer:
      env:
        - name: HELM_CACHE_HOME
          value: /helm-working-dir/.cache # workaround: plugin and cache locations conflict if identical
      initContainers:
        - name: helm-plugin-install
          image: alpine/helm
          env:
            - name: HELM_PLUGINS
              value: /helm-working-dir/plugins # configure Helm to write to the volume mount the repo server uses
          volumeMounts:
            - mountPath: /helm-working-dir
              name: helm-working-dir
          command: [ "/bin/sh", "-c" ]
          args: # install the plugin
            - apk --no-cache add curl;
              helm plugin install https://github.com/aslafy-z/helm-git --version 1.3.0;
              chmod -R 777 $HELM_PLUGINS;
```
### Updating the chart dependency
To use your modified chart in the downstream project, the following changes should be made to the `Chart.yaml` file that pulls in the modified chart as a dependency:

* Point `repository` to the modified chart on your branch in GitHub
* e.g.: `git+https://github.com/boozallen/aissemble/@extensions/extensions-helm/<modified-chart>?ref=<your-branch>`
* _**NB:** if the chart being tested is in a nested project under extensions-helm, update the repo path accordingly_
* Set `version` to `1.0.0`
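A hypothetical downstream `Chart.yaml` dependency updated per the steps above might look like this; the chart and branch names are placeholders:

```yaml
# Hypothetical downstream dependency pulling a modified chart from a GitHub branch
# via the helm-git plugin (chart and branch names are placeholders)
dependencies:
  - name: aissemble-spark-chart
    version: 1.0.0
    repository: "git+https://github.com/boozallen/aissemble/@extensions/extensions-helm/aissemble-spark-chart?ref=my-feature-branch"
```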

### Potential pitfalls

* There is an issue with committing Chart.lock files when using an explicit repository vs a repository alias, so Chart.lock files must not be committed.
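One way to enforce the rule above is to ignore `Chart.lock` in the chart directory; the sketch below assumes it is run from that directory:

```shell
# Append Chart.lock to .gitignore so it cannot be committed accidentally
# (run from the chart directory; the path is illustrative)
printf 'Chart.lock\n' >> .gitignore

# Confirm the entry is present
rc=1
grep -qx 'Chart.lock' .gitignore && rc=0
echo "Chart.lock ignored (0=yes): $rc"
```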
@@ -13,6 +13,10 @@ import javax.lang.model.SourceVersion

final Logger logger = LoggerFactory.getLogger("com.boozallen.aissemble.foundation.archetype-post-generate")

def dir = new File(new File(request.outputDirectory), request.artifactId)
def file = new File(dir, "deploy.sh")
file.setExecutable(true, false)

def groupIdRegex = ~'^[a-z][a-z0-9]*(?:\\.[a-z][a-z0-9]*)*$' // lowercase letters, numbers, and periods
def artifactIdRegex = ~'^[a-z][a-z0-9]*(?:-?[\\da-z]+)*$' // lowercase letters, numbers, and hyphens
def versionRegex = ~'^(0|[1-9]\\d*)\\.(0|[1-9]\\d*)\\.(0|[1-9]\\d*)(?:-((?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\\.(?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\\+([0-9a-zA-Z-]+(?:\\.[0-9a-zA-Z-]+)*))?$' // Semantic Versioning
@@ -61,6 +61,7 @@
    <directory/>
    <includes>
        <include>Tiltfile</include>
        <include>deploy.sh</include>
        <include>.tiltignore</include>
        <include>devops/**</include>
        <include>jenkinsPipelineSteps.groovy</include>
@@ -106,6 +107,20 @@
        </fileSet>
    </fileSets>
</module>
<module id="${rootArtifactId}-infrastructure"
        dir="__rootArtifactId__-infrastructure" name="${rootArtifactId}-infrastructure">
    <fileSets>
        <fileSet filtered="true" encoding="UTF-8">
            <directory/>
            <includes>
                <include>*/**</include>
            </includes>
            <excludes>
                <exclude>pom.xml</exclude>
            </excludes>
        </fileSet>
    </fileSets>
</module>
<module id="${rootArtifactId}-shared"
        dir="__rootArtifactId__-shared" name="${rootArtifactId}-shared">
</module>
@@ -0,0 +1,2 @@
pom.xml
target
@@ -0,0 +1,9 @@
apiVersion: v2
name: "${parentArtifactId}"
version: "${version}"
appVersion: "${version}"

dependencies:
  - name: aissemble-infrastructure-chart
    version: ${archetypeVersion}
    repository: oci://ghcr.io/boozallen
@@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>${groupId}</groupId>
        <artifactId>${parentArtifactId}</artifactId>
        <version>${version}</version>
    </parent>

    <artifactId>${parentArtifactId}-infrastructure</artifactId>
    <name>${projectName}::Infrastructure</name>
    <description>Contains the infrastructure artifacts for ${projectName}</description>
    <packaging>helm</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>${group.helm.plugin}</groupId>
                <artifactId>helm-maven-plugin</artifactId>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>

</project>
@@ -0,0 +1,21 @@
# Deploys ArgoCD with anonymous admin access enabled.
# ArgoCD will be available at http://localhost:30080/
aissemble-infrastructure-chart:
  jenkins:
    enabled: false
  ingress-nginx:
    enabled: false
  argo-cd:
    crds:
      keep: false
    configs:
      cm:
        admin.enabled: false
        users.anonymous.enabled: true
      rbac:
        policy.default: "role:admin"
    server:
      ingress:
        enabled: false
      service:
        type: "NodePort"
@@ -0,0 +1,88 @@
#!/bin/sh

APP_NAME=${artifactId}
INFRA_NAME=infrastructure

print_usage() {
    echo "Usage: $0 [startup|shutdown|up|down]"
    echo "    startup     create/upgrade deployment infrastructure"
    echo "    shutdown    tear down deployment infrastructure (tears down application if needed)"
    echo "    up          deploy application (starts deployment infrastructure if needed)"
    echo "    down        tear down application"
}

startup() {
    echo "Deploying infrastructure..."
    helm upgrade --install $INFRA_NAME ${artifactId}-infrastructure \
        --values ${artifactId}-infrastructure/values.yaml \
        --values ${artifactId}-infrastructure/values-dev.yaml
    if ! kubectl rollout status --namespace argocd deployment/argocd-server --timeout=30s; then
        exit 1
    fi
    argocd repo add ${projectGitUrl} --server localhost:30080 --plaintext --insecure-skip-server-verification
}

is_app_running() {
    argocd app get $APP_NAME --server localhost:30080 --plaintext > /dev/null 2>&1
}

deploy() {
    echo "Checking for deployment infrastructure..."
    if ! helm status $INFRA_NAME > /dev/null 2>&1; then
        startup
    fi

    if is_app_running; then
        echo "${artifactId} is deployed"
    else
        branch=$(git rev-parse --abbrev-ref HEAD)
        echo "Deploying ${artifactId} from branch '$branch'..."
        argocd app create $APP_NAME \
            --server localhost:30080 --plaintext \
            --dest-namespace ${artifactId} \
            --dest-server https://kubernetes.default.svc \
            --repo ${projectGitUrl} \
            --path ${artifactId}-deploy/src/main/resources \
            --revision $branch \
            --helm-set spec.targetRevision=$branch \
            --values values.yaml \
            --values values-dev.yaml \
            --sync-policy automated
    fi
}

down() {
    if is_app_running; then
        echo "Tearing down app..."
        argocd app delete $APP_NAME --server localhost:30080 --plaintext --yes
    else
        echo "${artifactId} is not deployed"
    fi
}

shutdown() {
    if ! helm status $INFRA_NAME > /dev/null 2>&1; then
        echo "Infrastructure already shut down"
    else
        if is_app_running; then
            down
        fi
        echo "Shutting down infrastructure..."
        helm uninstall $INFRA_NAME
    fi
}


if [ "$1" = "up" ]; then
    deploy
elif [ "$1" = "down" ]; then
    down
elif [ "$1" = "shutdown" ]; then
    shutdown
elif [ "$1" = "startup" ]; then
    startup
else
    print_usage
fi
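A subtlety worth noting in shell scripts like this one: after `if ! cmd`, the value of `$?` inside the branch reflects the `!` negation (zero), not `cmd`'s real exit code, so `exit $?` there would exit 0 even on failure. A minimal sketch of the semantics:

```shell
# `!` inverts the exit status, so $? inside the branch is the (zero) negated status
fail() { return 7; }

if ! fail; then
    neg_status=$?   # 0, not 7: status of the `! fail` pipeline
fi

# Capture the real status immediately after the command instead
fail
real_status=$?      # 7

echo "neg=$neg_status real=$real_status"
```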
