Design Principles¶
The tool is divided into two main parts:
- Fetch mode: the tool will collect all the evidence that is required by checks. Note that during this phase nothing is checked; only evidence collection is performed. Fetchers typically need access to third-party services using specific credentials.
- Check mode: run checks against the evidence stored in the local evidence locker. During this phase, checks can use the evidence that fetchers gathered in the previous "fetch" phase. They can also generate reports and create notifications. Checks must not access third-party services for gathering information. It is, however, permissible for check fixer functions to access third-party services.
Both fetch and check phases are run by unittest. This is very convenient, as fetchers and checks are loaded automatically by unittest.
Evidence¶
Fetchers and checks manage evidence. We have defined five types of evidence (see compliance.evidence):

- RawEvidence: Gathered by fetchers and used by checks as input. For example, a list of users in GitHub. If necessary, raw evidence can be partitioned if the content is valid JSON. All evidence content is stored as text by default, but raw evidence content can be stored as binary by setting the binary_content=True keyword argument when constructing a RawEvidence object.
- DerivedEvidence: Gathered/generated by fetchers and used by checks as input. Derived evidence is useful when a fetcher needs other evidence to perform computations over collected data in order to generate new evidence. This new evidence is considered derived in the sense that its data is not the same as the source data.
- TmpEvidence: Gathered by fetchers and used by checks as input. This type of evidence is similar to RawEvidence but is never pushed to the remote Git repository. This is useful for evidence that contains passwords or credentials.
- ExternalEvidence: Planted in the locker with plant and used by checks as input. For example, a list of users in GitHub.
- ReportEvidence: May be generated by checks. For instance, a report showing missing GitHub users.
See Compliance Fetchers section for conventions and expectations with respect to modifying RawEvidence.
All evidence has a settable ttl (Time To Live) property that defines how long the evidence should be considered valid. For instance, if new data is generated on a daily basis, then evidence gathered for that data should only be valid for one day. For this reason, any check trying to use evidence with an expired ttl will error.
All evidence has an is_empty property that defines an evidence's empty state. This provides value when monitoring evidence content for completeness. The property can be overridden to define "empty" for any given evidence. By default, evidence is considered empty if it has no content, is all whitespace, or if it is JSON and is an empty dictionary or list ({}, []).
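As an illustration, here is a minimal sketch of a raw evidence subclass that overrides is_empty and sets a ttl. The constructor arguments and the JSON content layout are assumptions for the example; check compliance.evidence for the exact signatures in your framework version.

import json

from compliance.evidence import RawEvidence


class GithubUsersEvidence(RawEvidence):
    """Hypothetical raw evidence holding a list of GitHub users."""

    @property
    def is_empty(self):
        # Consider the evidence empty when the (assumed) JSON payload
        # has no "users" key or that key holds an empty list.
        try:
            return not json.loads(self.content or '{}').get('users')
        except ValueError:
            return True


# Assumed constructor arguments (name, category); ttl is in seconds (1 day).
evidence = GithubUsersEvidence('users.json', 'github', ttl=60 * 60 * 24)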
Evidence Locker¶
The Locker is a helper for storing evidence securely in a Git repository. Locker is responsible for:

- Storing evidence files properly in Git so changes can be tracked. Provide the repo_url to define the remote evidence locker location, and the git configuration through the gitconfig parameter as a dictionary. You can provide the user's full name and email, and also activate commit GPG signing (which is the recommended way). As an example, your config file might look like:

{
  "locker": {
    "repo_url": "https://github.com/my-org/my-evidence-repo",
    "gitconfig": {
      "commit": {"gpgsign": true},
      "gpg": {"program": "gpg2"},
      "user": {
        "signingKey": "AABBCCDD",
        "email": "compliance-robot@my-org.com",
        "name": "compliance-robot"
      }
    }
  }
}
All git options are accepted. Set your git configuration with care, paying special attention to the attributes within the core section.

- Validating the ttl for a given evidence. An optional evidence ttl tolerance value can be configured to be applied during fetcher execution. This value (in seconds) tells fetchers to retrieve evidence that is nearly, but not yet, stale. If no value is supplied then fetchers will only retrieve new evidence after ttl has expired. You can set the optional ttl_tolerance value in your configuration JSON file like so:

{
  "locker": {
    "repo_url": "https://github.com/my-org/my-evidence-repo",
    "ttl_tolerance": 3600
  }
}
Check execution is not affected by this optional tolerance value because checks should only interact with evidence that is fresh (not stale).
It's generally a good idea to regularly "archive" an evidence locker in favor of a fresh one; a yearly locker archive/refresh is a good guideline to follow. However, in cases where checks may need to reference historical evidence, using a new locker will cause undesirable results in the short term. For cases like this, referencing historical evidence from a previous locker is possible by using the prev_repo_url option. With that option set, a check that is unable to find historical evidence in the current evidence locker will download the previous locker and look for the historical evidence there. Setting the option in your configuration JSON file would look similar to:

{
  "locker": {
    "repo_url": "https://github.com/my-org/my-evidence-repo",
    "prev_repo_url": "https://github.com/my-org/my-evidence-repo-old"
  }
}
The previous locker will no longer be downloaded once the new locker is primed with enough historical evidence to support all checks.
A locker can grow large, causing CI/CD jobs to run longer than desired due to locker download time. So, in addition to a sound locker archiving strategy, it is also possible to configure your locker to only download recent commits by using the shallow_days option. Setting the option in your configuration JSON file would look similar to:

{
  "locker": {
    "repo_url": "https://github.com/my-org/my-evidence-repo",
    "prev_repo_url": "https://github.com/my-org/my-evidence-repo-old",
    "shallow_days": 10
  }
}
When shallow_days is supplied, only commits since the current date minus the number of days set as shallow_days are included in the locker download. The option applies to both the locker and the previous locker (if applicable).

Remote hosting services (GitHub, GitLab, Bitbucket) typically have file size limitations that can vary from service instance to service instance. Exceeding a maximum file size will cause the service managing your evidence locker to reject a remote locker Git push request. Unfortunately, rejection notices from a service aren't always the most descriptive, so it often isn't clear why your push request was rejected. To that end, prior to a remote push, the framework will log a list of large files. The large file size threshold is configurable and can be set by using the large_file_threshold option. The value is in bytes and defaults to 50 MB. Setting the option in your configuration JSON file would look similar to:

{
  "locker": {
    "repo_url": "https://github.com/my-org/my-evidence-repo",
    "large_file_threshold": 50000000
  }
}
This should hopefully add some detail to a remote Git push rejection.
Compliance Fetchers¶
All fetchers should be implemented as child classes of ComplianceFetcher. Note that this class provides a set of methods that can be useful for saving some code.
The run-time engine will collect all the fetchers and run them when the --fetch option is provided.
The typical implementation of a ComplianceFetcher method looks like this:

# Collect the data from the source (pseudocode)
raw_evidence = fetch('the evidence')
# Store it, unmodified, in the evidence locker
locker.add_evidence(raw_evidence)
A fetcher should collect the data (from whatever source) and then store it straight into the locker. Thus, the fetcher should not modify any data from the source, in order to keep it raw.

However, there are some changes that can be applied that do not modify the original meaning of the generated raw evidence. The aim of these exceptions is to avoid committing data into the locker that has not actually changed.

A few examples of what is allowed:
- Sorting (e.g. sorting a JSON blob by keys; see the sketch below)
- Modifying data in an equivalent way, for instance storing seconds instead of milliseconds. A good rule of thumb: from the test code, would I be able to rebuild the original value of the raw evidence? If the answer is yes, then the modification is likely fine.
In any case, any modification of new raw evidence must be approved and agreed upon by the reviewers. By default, do not modify the raw data. If you need to, consider using derived evidence instead.
This is a list of modifications that are completely forbidden:
- Adding live-generated data that does not come from the source.
- Applying check-like logic (e.g. your data transformation includes an if statement); checks should test the evidence, not fetchers.
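For instance, a minimal sketch of the allowed vs. forbidden distinction inside a fetcher method, reusing the hypothetical _get_from_wherever helper from the examples below; the only change applied to the source data is deterministic key ordering, so the original values can still be rebuilt from the stored evidence:

import json

# Allowed: deterministic key ordering avoids committing unchanged data in a
# new order, while preserving every original value.
foo_bar_data = self._get_from_wherever(...)  # hypothetical helper
content = json.dumps(foo_bar_data, sort_keys=True)

# Forbidden: dropping records based on check-like logic; that belongs in a check.
# content = json.dumps([u for u in foo_bar_data if u['active']], sort_keys=True)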
Evidence Validation¶
A fetcher should only fetch data and store that data as evidence if the current version of that evidence is stale (its ttl has expired). To that end, we've provided some helpful decorators and context managers that validate ttl for you and, if necessary, write the evidence to the evidence locker for you after it has been fetched.
- store_raw_evidence and store_tmp_evidence decorators: Use one of these decorators on your fetcher method when you know the path and name of your raw or tmp evidence. The decorator takes, as an argument, the path to your raw or tmp evidence as a string.
Usage example:
...
from compliance.evidence import store_raw_evidence
...
@store_raw_evidence('foo/evidence_bar.json')
def fetch_foo_bar_evidence(self):
    # Fetcher code only executes if evidence is stale
    # Get the data from wherever
    foo_bar_data = self._get_from_wherever(...)
    # Return the content as a string
    # The decorator will write it to the evidence locker
    return json.dumps(foo_bar_data)
- raw_evidence and tmp_evidence context managers: Use one of these context managers within your fetcher method when your fetcher retrieves multiple, similar raw or tmp evidence based on a dynamic set of configurable values. In other words, the full name and content of the evidence is based on a configuration and is not known prior to execution of the fetcher logic. The context manager takes, as arguments, a locker object and the path to your raw or tmp evidence as a string. The context manager yields the corresponding raw or tmp evidence object.

Usage example:
...
from compliance.evidence import raw_evidence
...
def fetch_foo_bar_evidence(self):
    for system in systems:
        evidence_path = 'foo/evidence_bar_{}.json'.format(system)
        with raw_evidence(self.locker, evidence_path) as evidence:
            # None is returned if evidence is not stale
            if evidence:
                # Get the data from wherever
                foo_bar_data = self._get_from_wherever(...)
                # Set the content as a string
                # Upon exit it is written to the evidence locker
                evidence.set_content(json.dumps(foo_bar_data))
Note: This approach will not produce multiple log lines when the fetcher is run, as everything is executed within a single fetcher method.
See @parameterized below if you want to generate multiple running fetchers based on a parameter set.

- store_derived_evidence decorator: Use this decorator on your fetcher method when you know the paths and names of your source evidences and the path and name of your target derived evidence. The decorator takes, as arguments, a list of source evidence paths as strings and a target derived evidence path as a string. It also passes the source evidences to the decorated method in the form of method arguments.

Usage example:
...
from compliance.evidence import store_derived_evidence
...
@store_derived_evidence(
    ['raw/foo/evidence_bar.json', 'raw/foo/evidence_baz.json'],
    'foo/derived_bar_baz.json'
)
def fetch_foo_bar_baz_derived_evidence(self, bar_evidence, baz_evidence):
    # Fetcher code only executes if evidence is stale
    # Construct your derived evidence
    derived_data = self._do_whatever(bar_evidence, baz_evidence)
    # Return the content as a string
    # The decorator will write it to the evidence locker
    return json.dumps(derived_data)
- derived_evidence context manager: Use this context manager within your fetcher method when your fetcher generates multiple, similar derived evidences based on a dynamic set of configurable values. In other words, the names and content of the evidences are based on a configuration and are not known prior to execution of the fetcher logic. The context manager takes, as arguments, a locker object, the source evidence paths, and a target derived evidence path as a string. The source evidence paths can be in the form of a list of paths as strings; a dictionary of key/value pairs as strings, where the key is an evidence short name and the value is the evidence path; or simply a single evidence path as a string. The context manager yields a dictionary containing the source and target evidences as the dictionary values. The source evidence key is its evidence path if a list of source paths was provided, its evidence short name if a dictionary of paths was provided, or "source" if a single evidence path in the form of a string was provided. The target derived evidence key is always "derived".

Usage example (source list provided):
...
from compliance.evidence import derived_evidence
...
def fetch_foo_bar_baz_derived_evidence(self):
    for system in systems:
        sources = ['raw/foo/evidence_bar.json', 'raw/foo/evidence_baz.json']
        target = 'foo/derived_bar_baz_{}.json'.format(system)
        with derived_evidence(self.locker, sources, target) as evidences:
            # None is returned if target evidence is not stale
            if evidences:
                # Construct your derived evidence
                derived_data = self._do_whatever(
                    evidences['raw/foo/evidence_bar.json'],
                    evidences['raw/foo/evidence_baz.json']
                )
                # Set the content as a string
                # Upon exit it is written to the evidence locker
                evidences['derived'].set_content(json.dumps(derived_data))
Usage example (source dictionary provided):
...
from compliance.evidence import derived_evidence
...
def fetch_foo_bar_baz_derived_evidence(self):
    for system in systems:
        sources = {
            'bar': 'raw/foo/evidence_bar.json',
            'baz': 'raw/foo/evidence_baz.json'
        }
        target = 'foo/derived_bar_baz_{}.json'.format(system)
        with derived_evidence(self.locker, sources, target) as evidences:
            # None is returned if target evidence is not stale
            if evidences:
                # Construct your derived evidence
                derived_data = self._do_whatever(
                    evidences['bar'], evidences['baz']
                )
                # Set the content as a string
                # Upon exit it is written to the evidence locker
                evidences['derived'].set_content(json.dumps(derived_data))
Usage example (source string provided):
...
from compliance.evidence import derived_evidence
...
def fetch_foo_bar_derived_evidence(self):
    for system in systems:
        source = 'raw/foo/evidence_bar.json'
        target = 'foo/derived_bar_{}.json'.format(system)
        with derived_evidence(self.locker, source, target) as evidences:
            # None is returned if target evidence is not stale
            if evidences:
                # Construct your derived evidence
                derived_data = self._do_whatever(evidences['source'])
                # Set the content as a string
                # Upon exit it is written to the evidence locker
                evidences['derived'].set_content(json.dumps(derived_data))
- @parameterized helper: It is often the case that a fetcher implementation is general enough to be reused with different parameters. A good example is a fetcher that collects resources from a cloud provider across several accounts; the implementation is exactly the same for each account.

One option is to use the raw_evidence or tmp_evidence context managers described previously. However, that has its own caveats: for instance, the run log will only show one fetcher execution, whereas it would be better if each parameter generated its own log line so that it is clear in detail what happened when something goes wrong.

parameterized is an external library that can be used for generating multiple fetchers at runtime.
Warning: parameterized is not installed as part of the auditree-framework. Remember to install it if you use it in your project!

Usage example:
...
from parameterized import parameterized
...
def _get_domains():
    return get_config().get('my.domains')

@parameterized.expand(_get_domains)
def fetch_foo_bar_evidence(self, domain):
    with raw_evidence(self.locker, f'user/{domain}_users.json') as evidence:
        if evidence:
            data = get(f'https://{domain}/users')
            evidence.set_content(json.dumps(data))
In this example, auditree will generate multiple
fetch_foo_bar_evidence
methods at runtime, one per domain obtained from the configuration.
Evidence Dependency Chaining¶
Sometimes a fetcher needs evidence gathered by another fetcher in order to
perform its fetching operation. For example, a fetcher may need to collect
hardware/software inventory based on certain accounts/environments gathered by
another fetcher or fetchers. Since order of execution cannot be guaranteed, it
is possible that a dependent fetcher (inventory) will run prior to the fetcher
that gathers the (accounts/environments) evidence that it depends on. In
order to ensure that dependent evidence is always gathered, use the
evidence.get_evidence_dependency
helper function in the dependent fetcher to
access the evidence that the fetcher depends on. Using this function
ensures re-execution of the fetcher in the event that the dependent evidence has
not yet been populated/refreshed due to fetcher order of execution. Once all
fetchers have executed, the framework will re-execute all fetchers that failed
due to an unavailable evidence dependency.
get_evidence_dependency
usage example:
...
from compliance.evidence import store_raw_evidence, get_evidence_dependency
...
@store_raw_evidence('foo/evidence_bar.json')
def fetch_foo_bar_evidence(self):
    baz_evidence = get_evidence_dependency(
        'raw/foo/evidence_baz.json',
        self.locker
    )
    foo_bar_data = self._get_from_wherever_using_baz(baz_evidence, ...)
    ...
    return json.dumps(foo_bar_data)
Fetcher Execution¶
By default, the Auditree framework will run all fetchers (tests prefixed by fetch_) that it can find. However, it is possible to limit fetcher execution in bulk by using the --include and/or --exclude CLI options while providing a file path/name to a JSON config file containing a list of fetchers to include/exclude. The format of the JSON config file is a list of fetcher classes, where a fetcher class is represented as a string dot notation path to the fetcher class.
Fetcher include/exclude JSON config file example:
[
"fetcher_pkg.path_to_my_checks.checks.fetch_module_foo.FooFetcherClass",
"fetcher_pkg.path_to_my_checks.checks.fetch_module_bar.BarFetcherClass"
]
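For example, assuming the include file above is saved as fetchers_include.json (an illustrative name), a fetch run limited to those fetchers might look like:

compliance --fetch --include fetchers_include.json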
Compliance Checks¶
ComplianceCheck
is the parent class of
any set of checks that should be executed by the system. The run-time engine
will collect all the checks and run them when the --check
option is
provided on the command line.
Checks assume that all evidence is retrieved by fetchers. Consequently
checks should not be used to retrieve or store any RawEvidence
in the
evidence locker. Each check class may have from one to multiple checks defined
(that is, a check is a method prefixed with test_
in a check class). Each of
these checks will be executed by the Auditree framework with the following
possible results:
- OK: the check ran successfully and passed all validations.
- WARN: the check ran successfully but issued warnings based on validation results. A warning can represent a possible failure in the future.
- FAIL: the check ran successfully but did not pass all validations.
- ERROR: the check stopped abruptly and was not able to complete all validations.
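As a rough sketch, a check class tying these pieces together might look like the following. The compliance.check import path, the title property, the evidence path, and the _find_missing_users helper are assumptions for illustration; confirm them against your framework version. The failures recorded via add_failures are what drive the FAIL result above, just as add_warnings drives WARN.

from compliance.check import ComplianceCheck
from compliance.evidence import with_raw_evidences


class GithubUsersCheck(ComplianceCheck):
    """Hypothetical check validating GitHub users evidence."""

    @property
    def title(self):
        # Assumed required property used when reporting results.
        return 'GitHub users'

    @with_raw_evidences('github/users.json')
    def test_github_users(self, users_evidence):
        # Only runs if the evidence ttl has not expired.
        missing = self._find_missing_users(users_evidence)  # hypothetical helper
        if missing:
            self.add_failures('missing users', missing)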
Evidence Validation¶
A check should only perform operations on evidence if the current version of that evidence is not stale (its ttl has not expired). To that end, we've provided some helpful decorators and context managers that validate ttl for you and will ERROR the check if the evidence ttl has expired prior to executing the check's logic.
- with_raw_evidences, with_derived_evidences, with_tmp_evidences, and with_external_evidences decorators: Use these decorators on your check method when you know the path and name of your raw, derived, tmp or external evidence. Each decorator takes, as arguments, the paths to your evidence as strings or as evidence LazyLoader named tuples. An evidence LazyLoader has path and ev_class (evidence class) as attributes. If the requested evidence passes TTL validation, the evidence is then passed along to the decorated method in the form of method arguments. Use an evidence LazyLoader when dealing with sub-classed RawEvidence, DerivedEvidence, TmpEvidence, or ExternalEvidence and you want the evidence provided to the decorated method to be cast as that sub-classed evidence; otherwise use a string path and the evidence will be provided as the appropriate base evidence. A LazyLoader named tuple can be constructed by executing the lazy_load class method of any evidence class, such as BarEvidence.lazy_load('foo/evidence_bar.json').
Usage example:
...
from compliance.evidence import with_raw_evidences
from my_pkg.bar_evidence import BarEvidence
...
@with_raw_evidences(
    BarEvidence.lazy_load('foo/evidence_bar.json'),
    'foo/evidence_baz.json'
)
def test_bar_vs_baz(self, bar_evidence, baz_evidence):
    # Check code only executes if evidence is not stale.
    # Perform your check logic
    failures, warnings, successes = self._do_whatever(
        bar_evidence, baz_evidence
    )
    self.add_failures('bar vs. baz', failures)
    self.add_warnings('bar vs. baz', warnings)
    self.add_successes('bar vs. baz', successes)
- evidences context manager: Use this context manager within your check method when your check method acts on multiple, similar evidence based on a dynamic set of configurable values. In other words, the full name and content of the evidence is based on a configuration and is not known prior to execution of the check logic. The context manager takes, as arguments, the check (self) object and either evidence path strings or LazyLoader named tuples. An evidence LazyLoader has path and ev_class (evidence class) as attributes. The evidence arguments can be in the form of a list of paths as strings or LazyLoader named tuples; a dictionary of key/value pairs where the key is an evidence short name and the value is the evidence path as a string or a LazyLoader named tuple; or simply a single evidence path as a string or LazyLoader named tuple. The context manager yields a dictionary containing the evidence as the dictionary values if a list or dictionary of evidence paths or LazyLoader named tuples is provided, and yields an evidence object if a single evidence path as a string or LazyLoader named tuple is provided. When a dictionary is yielded by the context manager, the evidence key is its evidence path if a list of evidence paths or LazyLoader named tuples was provided, or its evidence short name if a dictionary of evidence paths or LazyLoader named tuples was provided. A LazyLoader named tuple can be constructed by executing the lazy_load class method of any evidence class, such as BarEvidence.lazy_load('foo/evidence_bar.json').
Usage example (list provided):
...
from compliance.evidence import evidences
from my_pkg.bar_evidence import BarEvidence
...
def test_bar_vs_baz(self):
    for system in systems:
        evidence_paths = [
            BarEvidence.lazy_load('foo/evidence_bar.json'),
            'raw/foo/evidence_baz.json'
        ]
        with evidences(self, evidence_paths) as evs:
            # Check code only executes if evidence is not stale.
            # Perform your check logic
            failures, warnings, successes = self._do_whatever(
                evs['foo/evidence_bar.json'],
                evs['raw/foo/evidence_baz.json']
            )
            self.add_failures('bar vs. baz', failures)
            self.add_warnings('bar vs. baz', warnings)
            self.add_successes('bar vs. baz', successes)
Usage example (dictionary provided):
...
from compliance.evidence import evidences
from my_pkg.bar_evidence import BarEvidence
...
def test_bar_vs_baz(self):
    for system in systems:
        evidence_paths = {
            'bar': BarEvidence.lazy_load('foo/evidence_bar.json'),
            'baz': 'raw/foo/evidence_baz.json'
        }
        with evidences(self, evidence_paths) as evs:
            # Check code only executes if evidence is not stale.
            # Perform your check logic
            failures, warnings, successes = self._do_whatever(
                evs['bar'],
                evs['baz']
            )
            self.add_failures('bar vs. baz', failures)
            self.add_warnings('bar vs. baz', warnings)
            self.add_successes('bar vs. baz', successes)
Usage example (string path provided):
...
from compliance.evidence import evidences
...
def test_bar_stuff(self):
    for system in systems:
        evidence_path = 'raw/foo/evidence_bar.json'
        with evidences(self, evidence_path) as evidence:
            # Check code only executes if evidence is not stale.
            # Perform your check logic
            failures, warnings, successes = self._do_whatever(evidence)
            self.add_failures('bar stuff', failures)
            self.add_warnings('bar stuff', warnings)
            self.add_successes('bar stuff', successes)
Usage example (LazyLoader provided):
...
from compliance.evidence import evidences
from my_pkg.bar_evidence import BarEvidence
...
def test_bar_stuff(self):
    for system in systems:
        lazy_evidence = BarEvidence.lazy_load('foo/evidence_bar.json')
        with evidences(self, lazy_evidence) as evidence:
            # Check code only executes if evidence is not stale.
            # Perform your check logic
            failures, warnings, successes = self._do_whatever(evidence)
            self.add_failures('bar stuff', failures)
            self.add_warnings('bar stuff', warnings)
            self.add_successes('bar stuff', successes)
Check Execution¶
The Auditree framework executes checks (tests prefixed by test_
) based
on accreditation groupings defined in a controls.json
config file.
This is especially useful when targeting check result content to the
appropriate groups of people. The framework will by default look for
controls.json
in the current directory. It is possible to supply the
framework with alternate controls.json
location(s) by providing an
alternate path or paths at the end of a compliance check execution command via
the CLI. In the case of multiple locations, the framework will combine the
content of all controls.json files found. With this check-to-accreditation
mapping, the framework can execute checks based on the accreditations passed
to the framework by the CLI.
controls.json
content format example:
{
"chk_pkg.chk_cat_foo.checks.chk_module_foo.FooCheckClass": ["accred.one"],
"chk_pkg.chk_cat_bar.checks.chk_module_bar.BarCheckClass": ["accred.one", "accred.two"]
}
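For example, a check run scoped to accred.one that also points the framework at a local controls.json might look like the following; the accreditation name is illustrative and the exact argument syntax should be confirmed with the CLI help:

compliance --check accred.one ./controls.json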
Fixers¶
After checks have been run, but before notifications or reports are generated, the Auditree framework will optionally try to fix the issues automatically. This is controlled with the --fix option. By default it is off, and this is the mode that is used during the daily CI runs in Travis, but you can also set it to dry-run or on.
In dry-run mode, the fixes are not actually run, but instead a message is printed out for each fix indicating what action would be attempted.
When fixes are run for real, they will attempt to perform the actions
listed in dry-run mode. If the fix succeeds, then a counter
fixed_failure_count
will be incremented. This counter is displayed
in the notification message.
See Fixers section for more information.
Report Builder¶
Once all checks and (optionally) fixers have been executed, the ReportBuilder generates reports by inspecting each check and storing the results in the locker. These reports are useful for providing detailed information regarding what failures were found.
See Report Builder section for more information.
Notifiers¶
After reports have been generated, the tool will collect notification
messages from them and will create a
_BaseNotifier
object which deals with the
specific notification mechanism (e.g. send Slack message, print
messages to stdout, etc).
See Notifiers section for more information.
Execution Config¶
The Auditree framework is designed to be run locally from your PC or from a CI server like Jenkins or Travis. The execution can be tweaked at two levels:

- Command line arguments: the tool can be configured through the command line for the most important bits (evidence repo location, notification mode, etc.).
- Component specific: by using JSON files and the -C option, you can specify configuration values for different components. For instance, if you use --notify slack, then you can configure this component to send notifications to different people/channels based on the accreditation. See Notifiers section for this example.
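Putting the two levels together, a combined fetch-and-check run might look like the following; the accreditation name and the auditree.json component configuration file name are illustrative:

compliance --fetch --check accred.one --notify slack -C auditree.json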
Credentials¶
There are two ways of providing credentials:
- Local file: if you want to configure your credentials in a local file, you will have to provide it to the framework using the --creds-path option. This file should be similar to this:

# -*- mode:conf -*-
# Token for github.com to be used by the Locker
[github]
token=XXX

# Webhook for Slack notifications
[slack]
webhook=XXX
# token=XXX  # can be used instead of webhook

# Token for PagerDuty notifications
[pagerduty]
api_key=XXX
events_integration_key=XXX
- Environment variables: each section and field of the local file can be rendered as an environment variable. For instance, suppose your code requires creds['github'].token or creds['my_service'].api_key. You just need to export:

GITHUB_TOKEN=XXX
MY_SERVICE_API_KEY=YYY
This is equivalent to the credentials file:

[github]
token=XXX

[my_service]
api_key=YYY
Creds with .env files and 1Password¶
By combining the environment variable method above with the 1Password CLI, it is possible to grab secrets from 1Password and inject them into Auditree. Here is how to do it:
Create the following alias:
alias compliance="op run --env-file .env -- compliance"
In your fetchers/checks project, create a .env file with the following schema:

<SECTION>_<ATTRIBUTE>="op://<VAULT>/<ITEM>/<FIELD>"
For example:
GITHUB_TOKEN="op://Private/github/token"
MY_SERVICE_ORG="the-org-id"
MY_SERVICE_API_KEY="op://Shared/my_service/api_key"
Now running compliance will pull credentials from 1Password vaults.