I wrote previously about the default configuration of Spring oauth-authorization-server. Now let's jump into how we can customize it to suit our requirements. Starting with this article, we will discuss how we can customize the JWT token claims with default configurations (though you can change them as per your requirement). The default access_token claims are: JSON { "iss": "http://localhost:6060", "sub": "spring-test", "aud": "spring-test", "nbf": 1697183856, "exp": 1697184156, "iat": 1697183856 } After customization with additional claims (roles, email, ssn and username), it looks like: JSON { "sub": "spring-test", "aud": "spring-test", "nbf": 1699198349, "roles": [ "admin", "user" ], "iss": "http://localhost:6060", "exp": 1699198649, "iat": 1699198349, "client_id": "spring-test", "email": "test-user@d3softtech.com", "ssn": "197611119877", "username": "test-user" } Let's see how we can achieve that in the Spring Authorization Server. Spring provides the OAuth2TokenCustomizer<T extends OAuth2TokenContext> interface (FunctionalInterface) to customize the OAuth2Token which can be used to customize any token issued by Spring OAuth Server. Java @FunctionalInterface public interface OAuth2TokenCustomizer<T extends OAuth2TokenContext> { /** * Customize the OAuth 2.0 Token attributes. * * @param context the context containing the OAuth 2.0 Token attributes */ void customize(T context); } Therefore, to provide the customizer to the Spring context, define a bean using configuration. You can define one or more customizers to support different token flows. Single Customizer If there is a requirement to customize the token for a single flow, it can be defined with Customizer as a bean, like the one below for a client-credential (grant-type) token. Java @Configuration public class AuthorizationServerConfiguration { @Bean protected OAuth2TokenCustomizer<JwtEncodingContext> jwtCustomizer() { return jwtContext -> { if (CLIENT_CREDENTIALS.equals(jwtContext.getAuthorizationGrantType()) && ACCESS_TOKEN.equals( jwtContext.getTokenType())) { OAuth2ClientCredentialsAuthenticationToken clientCredentialsAuthentication = jwtContext.getAuthorizationGrant(); Map<String, Object> additionalParameters = clientCredentialsAuthentication.getAdditionalParameters(); additionalParameters.forEach((key, value) -> jwtContext.getClaims().claim(key, value)); } }; } } First, it checks for the flow (client-credential, code, etc.) and then pulls the additional parameters from the request and adds them to the JwtContext. Once added to JwtContext, it will be added to JWT claims in response. Additional parameters in the request can be provided as the query param or as a body, such as: Query Param In the test (refer to AuthorizationServerTest.verifyTokenEndpoint_WithAdditionParamsAsQueryParam): Java webTestClient.post() .uri(uriBuilder -> uriBuilder.path("/oauth2/token").queryParam("grant_type", "client_credentials") .queryParam("email", TEST_USER_EMAIL).queryParam("ssn", TEST_USER_SSN) .queryParam("username", TEST_USER_NAME).queryParam("roles", Set.of("admin", "user")).build()) .headers(httpHeaders -> httpHeaders.setBasicAuth("spring-test", "test-secret")).exchange() .expectStatus().isOk() .expectBody() .jsonPath("$.access_token").value(this::verifyAccessToken) .jsonPath("$.token_type").isEqualTo("Bearer") .jsonPath("$.expires_in").isEqualTo(299); In the example above, a POST request is used to invoke the /oauth2/token endpoint of the authorization server to get the access-token. 
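Outside of the test, the same request could be made with any HTTP client. Here is an illustrative curl equivalent (a sketch only; the host, client credentials, and claim values are the ones used in the examples above, so adjust them for your setup):

curl -X POST -u spring-test:test-secret \
  "http://localhost:6060/oauth2/token?grant_type=client_credentials&email=test-user@d3softtech.com&ssn=197611119877&username=test-user"

The access_token field of the response carries the customized JWT shown at the beginning of the article.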
The minimum parameters required by the authorization server are: grant_type, client_id (as header), and client_secret (as header). All the other parameters are additional parameters that you can provide to customize the access_token. As in the above example, we have added email, ssn, username and roles as additional parameters. Body Param In the test (refer to AuthorizationServerTest.verifyTokenEndpoint): Java MultiValueMap<String, String> tokenRequestParams = new LinkedMultiValueMap<>(); tokenRequestParams.add("grant_type", CLIENT_CREDENTIALS.getValue()); tokenRequestParams.add("email", TEST_CLIENT_ID); tokenRequestParams.add("ssn", TEST_SECRET); tokenRequestParams.add("username", TEST_CLIENT_ID); tokenRequestParams.add("roles", TEST_SECRET); webTestClient.post() .uri(uriBuilder -> uriBuilder.path("/oauth2/token").build()) .contentType(MediaType.APPLICATION_FORM_URLENCODED) .body(BodyInserters.fromFormData(tokenRequestParams)) .headers(httpHeaders -> httpHeaders.setBasicAuth("spring-test", "test-secret")) .exchange() .expectStatus().isOk() .expectBody() .jsonPath("$.access_token").exists() .jsonPath("$.token_type").isEqualTo("Bearer") .jsonPath("$.expires_in").isEqualTo(299); Parameters to the /oauth2/token endpoint can also be provided in the body of the POST request. In both examples, client_id and client_secret are passed via the basic auth header; in this case, the additional parameters are passed as form body params instead of query params. Multiple Customizers If there is a need to customize the token for multiple flows, we can take the delegate-customizer approach. The delegate customizer forwards each request to all the customizers we define, and the token is then customized by whichever of them match the filter criteria (grant type and token type) defined in that customizer. Let's take an example where we want to customize the token for the client-credentials and code flows. To do so, we will first define a delegate customizer as: Java @Component public class OAuth2TokenCustomizerDelegate implements OAuth2TokenCustomizer<JwtEncodingContext> { private List<OAuth2TokenCustomizer<JwtEncodingContext>> oAuth2TokenCustomizers; public OAuth2TokenCustomizerDelegate() { oAuth2TokenCustomizers = List.of( new OAuth2AuthorizationCodeTokenCustomizer(), new OAuth2ClientCredentialsTokenCustomizer()); } @Override public void customize(JwtEncodingContext context) { oAuth2TokenCustomizers.forEach(tokenCustomizer -> tokenCustomizer.customize(context)); } } As the delegate customizer is defined as a component, it will be picked up by Spring as a bean and added to the application context as an OAuth2TokenCustomizer. Every token-creation request will be delegated to this customizer. Now we can define our own customizers that will customize the token according to our needs. Client-Credentials Token Customizer Java public class OAuth2ClientCredentialsTokenCustomizer implements OAuth2TokenCustomizer<JwtEncodingContext> { @Override public void customize(JwtEncodingContext jwtContext) { if (CLIENT_CREDENTIALS.equals(jwtContext.getAuthorizationGrantType()) && ACCESS_TOKEN.equals( jwtContext.getTokenType())) { OAuth2ClientCredentialsAuthenticationToken clientCredentialsAuthentication = jwtContext.getAuthorizationGrant(); Map<String, Object> additionalParameters = clientCredentialsAuthentication.getAdditionalParameters(); additionalParameters.forEach((key, value) -> jwtContext.getClaims().claim(key, value)); } } } OAuth2ClientCredentialsTokenCustomizer is responsible for the client_credentials grant type (flow).
It decides whether a request should be handled by checking the grant type and token type. Authorization-Code Token Customizer Java public class OAuth2AuthorizationCodeTokenCustomizer implements OAuth2TokenCustomizer<JwtEncodingContext> { @Override public void customize(JwtEncodingContext jwtContext) { if (AUTHORIZATION_CODE.equals(jwtContext.getAuthorizationGrantType()) && ACCESS_TOKEN.equals( jwtContext.getTokenType())) { OAuth2AuthorizationCodeAuthenticationToken oAuth2AuthorizationCodeAuthenticationToken = jwtContext.getAuthorizationGrant(); Map<String, Object> additionalParameters = oAuth2AuthorizationCodeAuthenticationToken.getAdditionalParameters(); additionalParameters.forEach((key, value) -> jwtContext.getClaims().claim(key, value)); } } } OAuth2AuthorizationCodeTokenCustomizer is responsible, as the name suggests, for the authorization_code grant type (code flow). Sample code The sample code can be found here. The functional test class AuthorizationServerTest has steps on how to: Initiate the code flow against the authorization endpoint with the required parameters Authenticate the user Collect the code after successful authentication Exchange the code for tokens Introspect the token Refresh the token Revoke tokens Introspect post revocation I hope this post helps you customize your tokens.
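(In the customizers above, CLIENT_CREDENTIALS, AUTHORIZATION_CODE, and ACCESS_TOKEN are assumed to be static imports of the corresponding AuthorizationGrantType and OAuth2TokenType constants.) As a closing illustration that is not part of the original sample code, a resource server built with Spring Security could map the customized "roles" claim to granted authorities roughly as follows; this is a minimal sketch, assuming the spring-boot-starter-oauth2-resource-server dependency is on the classpath:

Java
@Bean
JwtAuthenticationConverter jwtAuthenticationConverter() {
    // Map the custom "roles" claim (added by the customizers above) to Spring Security authorities
    JwtGrantedAuthoritiesConverter authoritiesConverter = new JwtGrantedAuthoritiesConverter();
    authoritiesConverter.setAuthoritiesClaimName("roles");
    authoritiesConverter.setAuthorityPrefix("ROLE_");

    JwtAuthenticationConverter converter = new JwtAuthenticationConverter();
    converter.setJwtGrantedAuthoritiesConverter(authoritiesConverter);
    return converter;
}

With such a bean in place, a token carrying "roles": ["admin", "user"] would be translated into the ROLE_admin and ROLE_user authorities on the resource-server side.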
Kubernetes (K8s), an open-source container orchestration system, has become the de-facto standard for running containerized workloads thanks to its scalability and resilience. Although K8s has the capabilities to streamline deployment processes, the actual deployment of applications can be cumbersome, since deploying an app to a K8s cluster typically involves managing multiple K8s manifests (like Deployment, Service, ConfigMap, Secret, Ingress, etc.) in YAML format. This isn't ideal because it introduces additional operational overhead due to the increased number of files for one app. Moreover, it often leads to duplicated, copy-pasted sections of the same app across different environments, making it more susceptible to human errors. Helm, a popular package manager for Kubernetes, is designed to solve these deployment issues and help us manage K8s apps. Helm Charts and Helm Secrets Helm provides a straightforward way to define, install, and upgrade apps on K8s clusters. It is based on reusable templates called Helm charts, which encapsulate all the necessary K8s manifests, configurations, and dependencies into one single package, making it a whole lot easier to consistently version-control, publish/share, and deploy apps. Since most apps rely on configurations, Helm charts often rely on ConfigMaps and Secrets (for sensitive information) to pass values to the apps as environment variables, following the Twelve-Factor App methodology. However, handling secrets in Helm can be challenging due to security concerns and collaboration/access control reasons: Security concerns: Secrets contain sensitive information such as passwords, API keys, and database credentials. Managing secrets securely is crucial to protect sensitive data from unauthorized access. Helm must ensure that secrets are properly encrypted, stored, and transmitted. Collaboration/access control: Helm promotes collaboration among teams by sharing charts and configurations. However, if secrets are included in these shared charts, controlling access becomes challenging. Ensuring that only authorized individuals can access and modify secrets is crucial for maintaining security and compliance. In this article, we aim to provide comprehensive solutions to solve these challenges once and for all. Here's what you can expect: Hands-on Tutorials: We will guide you through practical tutorials that demonstrate the usage of specialized tools such as the helm-secrets plugin and the External Secrets Operator. These tutorials will equip you with the knowledge and skills to effectively manage secrets in Helm deployments. Integration With CI/CD: Helm secrets are also a critical aspect of CI/CD pipelines. We will explore the possibilities, pros, and cons of CI/CD integrations, making sure Helm secrets and security are handled not only in manual deployment but also in automated workflows. Secrets Rotation: Secrets rotation is a necessary security practice. We will introduce a tool that simplifies the redeployment of apps when secrets rotation occurs. Tool Comparison and FAQs: To assist you in selecting the right tool for your specific needs, we will list the pros and cons and a flowchart to help you decide. Additionally, we will address some frequently asked questions to further clarify any doubts you may have. Without further ado, let's dive in and explore the solutions to revolutionize your secrets management within Helm charts. 
Tutorial: The helm-secrets Plugin As mentioned in the previous section, managing secrets for Helm charts can be challenging because Helm chart Secrets contain sensitive information, and it's difficult to control access between team members. With that in mind, there are two (and maybe only two) approaches to managing secrets in Helm charts: either we store sensitive information in Helm charts encrypted, or we don't store them in the charts at all: Encrypt the Secrets values in the values file of the Helm charts. This way, we ensure the chart is safe to share and check into version control without worrying about leaking sensitive information. We still need to figure out a control mechanism to share access (decrypt the values) among team members. Do not store the Secrets values in the Helm charts at all. This way, the sensitive information isn't in the Helm charts at all, so it's completely secure. Then we need to figure out a way to actually deploy Secrets into K8s clusters. A Quick Introduction to the helm-secrets Plugin TL;DR: the helm-secrets plugin can work in both of the two above-mentioned methods: Encrypting sensitive information in the Helm charts and decrypting the values on the fly. Storing secrets elsewhere (like in a secrets manager) and injecting them into the Helm charts when Helm deployments happen. How helm-secrets works OK, now the long version. Helm-secrets is a Helm plugin that manages secrets. What's a Helm plugin, then? Good question. Helm plugins are extensions that add additional functionalities and capabilities to Helm. These can be new commands, features, or integrations. Plugins can be used to perform various tasks, such as managing secrets, encrypting values, validating charts, etc. To use a Helm plugin, we typically install it using the helm plugin install command, and then we can invoke the plugin's commands just like any other native Helm command. With helm-secrets, the values can be stored encrypted in Helm charts. However, the plugin does not handle the encryption/decryption operations itself; it offloads and delegates the cryptographic work to another tool: SOPS SOPS, short for Secrets OPerationS, is an open-source text file editor by Mozilla that encrypts/decrypts files automatically. With SOPS, when we write a file, SOPS automatically encrypts the file before saving it to the disk. For that, it uses the encryption key of our choice: this can be a PGP key, an AWS KMS key, or many others. helm-secrets can also work in a "cloud" mode, where secrets are not stored in the Helm chart, but in a cloud secret manager. We then simply refer to the path of the secret in the cloud in the file, and the secret is automatically injected upon invoking helm install. Example 1: helm-secrets With SOPS In this example, we will store encrypted secrets in the Helm charts. Since this relies on SOPS, we first need to install it. The easiest way to install SOPS is via brew: brew install sops For other OS users, refer to the official GitHub repo of SOPS. Then, let's configure SOPS to use PGP keys for encryption: brew install gnupg If you are using another OS, for example, Linux, you can use the corresponding package manager. 
Most likely, this would work: sudo apt-get install gnupg With GnuPG installed, creating a key is as simple as the following (remember to put your name as the value of KEY_NAME): export KEY_NAME="Tiexin Guo" export KEY_COMMENT="test key for sops" gpg --batch --full-generate-key <<EOF %no-protection Key-Type: 1 Key-Length: 4096 Subkey-Type: 1 Subkey-Length: 4096 Expire-Date: 0 Name-Comment: ${KEY_COMMENT} Name-Real: ${KEY_NAME} EOF Get the GPG key fingerprint: $ gpg --list-secret-keys "${KEY_NAME}" gpg: checking the trustdb gpg: marginals needed: 3 completes needed: 1 trust model: pgp gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u sec rsa4096 2023-10-17 [SCEAR] BE574406FE117762E9F4C8B01CB98A820DCBA0FC uid [ultimate] Tiexin Guo (test key for sops) ssb rsa4096 2023-10-17 [SEAR] In the output, right under the "sec" line, you can find the GPG key fingerprint (in my case, it's "BE574406FE117762E9F4C8B01CB98A820DCBA0FC"). Then we need to configure SOPS to use this PGP key for encryption/decryption. To do so, create a file named .sops.yaml under your $HOME directory with the following content: creation_rules: - pgp: >- BE574406FE117762E9F4C8B01CB98A820DCBA0FC Remember to replace the fingerprint with the one generated in the previous step. Installing and configuring SOPS with PGP keys is not simple; refer to my blog on SOPS for more details. Finally, we can install helm-secrets. Click here to get the latest version (at the time of writing, the latest version is v4.5.1). Then, run the following command to install: helm plugin install https://github.com/jkroepke/helm-secrets --version v4.5.1 Let's create a secret file named credentials.yaml.dec with the following content: password: test To encrypt this file using helm-secrets, run the following command: helm secrets encrypt credentials.yaml.dec > credentials.yaml If you open the generated credentials.yaml file, you will see that its content is encrypted by SOPS. Next, we can refer to the encrypted value in the Helm chart's Secrets. Suppose we have a file named your-chart/templates/secrets.yaml with the following content: apiVersion: v1 kind: Secret metadata: name: helloworld labels: app: helloworld chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" release: "{{ .Release.Name }}" heritage: "{{ .Release.Service }}" type: Opaque data: password: {{ .Values.password | b64enc | quote }} This template will use the value from the helm-secrets encrypted credentials.yaml. When installing the Helm chart, instead of using helm install, use helm secrets install: helm secrets install release-name -f values.yaml -f credentials.yaml your-chart Remember not to commit the credentials.yaml.dec file to git repositories, as it contains clear-text passwords. However, the SOPS-encrypted result credentials.yaml can be committed as part of the Helm chart (although it is generally recommended not to include secret values in Helm charts). If you choose to commit encrypted values, make sure to use a private chart repository for internal use only. In this case, we can store sensitive information in an encrypted values file. To share the encrypted values file with your team members, you can add their public key fingerprints to the SOPS configuration. This way, access control is managed by SOPS rather than helm-secrets. SOPS supports various encryption methods, such as using AWS KMS keys for encryption and sharing access through AWS IAM policies. For more encryption methods supported by SOPS, refer to my other blog.
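For reference, the generated credentials.yaml looks roughly like the following (abbreviated and purely illustrative; the exact metadata fields depend on the SOPS version and key type in use):

YAML
password: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
sops:
    pgp:
        - fp: BE574406FE117762E9F4C8B01CB98A820DCBA0FC
          created_at: "2023-10-17T00:00:00Z"
          enc: |
            -----BEGIN PGP MESSAGE-----
            ...
            -----END PGP MESSAGE-----
    lastmodified: "2023-10-17T00:00:00Z"
    mac: ENC[AES256_GCM,data:...,type:str]
    unencrypted_suffix: _unencrypted
    version: 3.8.1

Only the values are encrypted; the keys (here, password) stay readable, which keeps diffs and code reviews meaningful.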
For more details on using helm-secrets with SOPS, refer to the official doc here. Example 2: helm-secrets With Cloud Secrets Managers In the previous example, we discussed how to store sensitive information in an encrypted form within the values file. However, helm-secrets offers another mode that integrates with popular cloud secrets managers like HashiCorp Vault or AWS Secrets Manager. To enable the integration with cloud secrets managers, you need to set the environment variable HELM_SECRETS_BACKEND=vals before running Helm. This will activate the vals integration in helm-secrets: export HELM_SECRETS_BACKEND=vals Vals is a tool for managing secret values from various backends. It requires cloud provider credentials to fetch secrets from the secret services, so make sure you have the necessary credentials in place before attempting to use it. Let's assume you have a file called secrets.yaml located at your-chart/templates/secrets.yaml. Here's an example of its content: apiVersion: v1 kind: Secret metadata: name: helloworld labels: app: helloworld chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" release: "{{ .Release.Name }}" heritage: "{{ .Release.Service }}" type: Opaque data: password: '{{ .Values.password | b64enc }}' In your values.yaml file, you can include the following snippet: password: ref+awssecrets://path/to/my/secret/value Finally, you can install everything together using the following command: helm secrets install release-name -f values.yaml your-chart This command will inject the secret value from AWS Secrets Manager, located at "path/to/my/secret/value", into the variable "password" defined in the values file. Simplifying Continuous Deployment Integration With helm-secrets If you're already using SOPS, then helm-secrets is a great choice for seamless integration. It also offers cloud integrations if you prefer not to store encrypted data in values files. While helm-secrets can be integrated with major CD tools like Argo CD, there is some operational overhead involved. This is because both SOPS and the helm-secrets plugin are required by the CD tool, as shown in the previous examples. For instance, to integrate Argo CD with helm-secrets, you need to ensure that the Argo CD server container has both SOPS and helm-secrets. This can be achieved by building a customized Docker image; more details can be found here. Another option is to install SOPS or vals and helm-secrets through an init container on the argocd-repo-server Deployment. This requires changing the initContainers args. More details can be found here. However, both options have their drawbacks. Customizing the Docker image means maintaining an additional image, while customizing the initContainers commands results in a more complex values file for Argo CD, which can be challenging to maintain. Is there a better or alternative way to manage Helm secrets? Let's continue exploring. External Secrets Operator A Quick Introduction to External Secrets Operator The External Secrets Operator is a K8s operator that facilitates the integration of external secret management systems, such as AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, and Azure Key Vault, with K8s. Simply put, this operator automatically retrieves secrets from these secrets managers using external APIs and injects them into Kubernetes Secrets.
Unlike helm-secrets which either stores encrypted data in the values file using another tool (SOPS) or refers to secrets stored in cloud secrets managers in the values file, the External Secrets Operator does not require including secrets.yaml as part of the Helm templates. It uses another custom resource ExternalSecret, which contains the reference to cloud secrets managers. What does the custom resource do? Let's dive deeper to take a look under the hood of External Secrets Operator. How External Secrets Operator Works Here's an overview of how the External Secrets Operator works: SecretStore configuration: First, we define a SecretStore resource that specifies the connection details and authentication credentials for the external secret management system with which we want to integrate. ExternalSecret Configuration: Next, we create an ExternalSecret resource that defines the mapping between the external secrets and Kubernetes Secrets. Syncing secrets: The External Secrets Operator continuously monitors the ExternalSecret resources. When a new or updated ExternalSecret is detected, the operator retrieves the specified secrets from the external secret management system using the configured SecretStore. Automatic Synchronization: The External Secrets Operator periodically synchronizes the secrets based on a defined refresh interval. External-Secrets Helm Chart Example Let's see an example of using external secrets to manage Helm secrets. The idea is simple: we do not include K8s Secrets as part of the Helm chart templates, but rather, we use ExternalSecret, which contains no sensitive information at all. First, let's install the External Secret Operator itself: helm repo add external-secrets https://charts.external-secrets.io helm repo update helm install external-secrets \ external-secrets/external-secrets \ -n external-secrets \ --create-namespace We need to make sure access to AWS Secrets Manager is granted to the external secret operator. For a quick test with AWS Secrets Manager, we can create a secret containing our AWS credentials (do not do this in production) with access to Secrets Manager. Execute the following commands: echo -n 'KEYID' > ./access-key echo -n 'SECRETKEY' > ./secret-access-key kubectl create secret generic awssm-secret --from-file=./access-key --from-file=./secret-access-key Make sure the access key has permission to access AWS Secrets Manager. For more information, check out AWS IAM policies. This approach (access key as a K8s Secret) is only suitable for tutorials. For a production environment, it is recommended to use IAM roles for service accounts. Refer to the AWS official documentation and the External Secret Operator official documentation for more details. Then, let's create a SecretStore pointing to AWS Secrets Manager in a specific account and region. Create a file named secretstore.yaml with the following content: apiVersion: external-secrets.io/v1beta1 kind: SecretStore metadata: name: secretstore-sample spec: provider: aws: service: SecretsManager region: ap-southeast-1 auth: secretRef: accessKeyIDSecretRef: name: awssm-secret key: access-key secretAccessKeySecretRef: name: awssm-secret key: secret-access-key Apply the configuration by running the command: kubectl apply -f secretstore.yaml Make sure to update the region with the appropriate AWS Secrets Manager region where your secrets are stored. Then we can create an ExternalSecret as part of a Helm chart template that synchronizes a secret from AWS Secrets Manager as a Kubernetes Secret. 
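One assumption in this walkthrough is that the referenced secret already exists in AWS Secrets Manager. For a quick test, it could be created with the AWS CLI, using the name (MyTestSecret1) and property (password) that the ExternalSecret in the next step refers to:

aws secretsmanager create-secret \
  --name MyTestSecret1 \
  --secret-string '{"password":"test"}'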
Create a file named your-chart/templates/externalsecret.yaml with the following content: apiVersion: external-secrets.io/v1beta1 kind: ExternalSecret metadata: name: example spec: refreshInterval: 1h secretStoreRef: name: secretstore-sample kind: SecretStore target: name: helloworld creationPolicy: Owner data: - secretKey: password remoteRef: key: MyTestSecret1 property: password The above ExternalSecret will create a K8s Secret named "helloworld" (specified in the "target" section) with one key "password", whose value is retrieved from MyTestSecret1.password in the AWS Secrets Manager. Without changing anything else, we can simply install the chart by running: helm install release-name -f values.yaml your-chart After installation, we can verify the Secret is already created: kubectl describe secret helloworld In the output, you should see the details of the created K8s Secret, which is automatically synchronized by the External Secrets Operator, fetching values from AWS Secrets Manager. We can now utilize envFrom and secretRef in the Helm chart's Deployment to pass these secret values as environment variables. Seamless Integration of External Secrets Operator with Continuous Deployment Tools Unlike helm-secrets, which necessitates the installation of plugins to Helm and additional command-line tools like SOPS, the External Secrets Operator offers a more streamlined approach. It does not require any modifications to Helm or local binaries. Instead, it is solely installed on the K8s cluster side, utilizing an operator that needs to be installed and a SecretStore custom resource that must be defined. Due to this inherent simplicity, integrating the external secret operator with continuous deployment tools such as Argo CD is effortless and trouble-free. No changes need to be made on the continuous deployment tool side. The only modification required is to the Helm chart itself: Secrets should not be included in the Helm chart template; instead, ExternalSecret should be used in the templates. Vault Secrets Operator for Kubernetes Secrets and Helm Secrets There is another major choice regarding managing secrets for Helm charts: the Vault Secrets Operator. The Vault Secrets Operator works more or less similarly to the External Secrets Operator, but since it's made by HashiCorp, it only works with HashiCorp Vault. It works by watching for changes in the vault and synchronizing from the vault to a K8s Secret. The operator writes the source secret data directly to the destination Kubernetes Secret, ensuring that any changes made to the source are replicated to the destination over its lifetime. It's worth noting that the External Secrets Operator also works with HashiCorp Vault. Still, if you are already using HashiCorp Vault, maybe the vault secrets operator can be a better choice since it's specifically designed for HashiCorp Vault and provides additional features tailored to Vault's capabilities, like dynamic secrets. How to Automatically Restart Pods When Secrets Are Updated In this section, we will discuss how to automatically restart pods when secrets are updated. This is an important requirement for most teams and companies. We will explore different approaches to achieve this. If you are using a standard Helm chart without helm-secrets or the External Secrets Operator, you can use a hack to ensure that a Deployment's annotation section is updated when the secrets.yaml file changes. This can be done by using the sha256sum function. 
Here's an example: kind: Deployment spec: template: metadata: annotations: checksum/config: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }} [...] This would work, and based on my experience, a lot of teams and companies use this in production, but in my eyes, this isn't ideal. For starters, it looks a bit messy in the annotations section. In the case of helm-secrets, if the values are updated, we must do another helm secrets upgrade, since the values are either part of the Helm chart or injected at run time. This could be a little bit redundant, because what if neither the app nor the chart is updated, and only a secret value is updated? A full-on Helm upgrade seems a bit much. Using the External Secrets Operator with Helm is even trickier: there is no secrets.yaml anymore, and the ExternalSecret only refers to secrets managers in the cloud, meaning the externalsecret.yaml doesn't change even if the values are updated in secrets managers, so we can't even use the checksum function. To address these challenges, we recommend using Reloader. Reloader can watch for changes in secrets and ConfigMaps, and perform rolling upgrades on pods associated with DeploymentConfigs, Deployments, Daemonsets, Statefulsets, and Rollouts. To install Reloader, simply run: helm repo add stakater https://stakater.github.io/stakater-charts helm repo update helm install stakater/reloader # For helm3 add --generate-name flag or set the release name Once Reloader is installed, you can configure it to automatically discover Deployments where specific config maps or secrets are used. To do this, add the reloader.stakater.com/auto annotation to the main metadata of your Deployment as part of the Helm chart template: kind: Deployment metadata: annotations: reloader.stakater.com/auto: "true" spec: template: metadata: [...] This will discover Deployments automatically where a configmap or a secret is being used, and it will perform rolling upgrades on related pods when either is updated. Combined with External Secrets Operator, everything is solved without untidy hacks like the checksum annotation! For more detailed usage of Reloader, check out the official doc here. Summary Learning to use helm-secrets can be challenging due to its integration with SOPS and the various encryption options it offers. Additionally, integrating helm-secrets with CD tools can result in increased operational overhead. However, if you only need to securely store sensitive data in the values file and do not plan on using a cloud secrets manager, helm-secrets is a suitable choice. The Vault Secrets Operator and External Secrets Operator function similarly, but the Vault Secrets Operator, designed specifically for HashiCorp Vault, offers additional features such as dynamic secret generation. For most users, the External Secrets Operator is likely the better choice as it is compatible with major public cloud providers' secrets managers and seamlessly integrates with CD tools and the Reloader tool. This conclusion is supported by the higher number of GitHub stars for the External Secrets Operator (3k stars) compared to the Vault Secrets Operator (0.3k stars) and helm-secrets (1k stars). Of course, your own experience may differ, depending on your preferences. To make things easier, we created a workflow helping you choose among these solutions: Secret in Helm: what's the best solution?
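To tie the External Secrets Operator and Reloader examples above together, here is an illustrative Deployment template sketch (not taken from the original charts) that consumes the synchronized helloworld Secret via envFrom/secretRef and opts into Reloader's automatic rolling restarts:

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: nginx # placeholder image for illustration
          envFrom:
            - secretRef:
                name: helloworld # the Secret created by the ExternalSecret above

When the value in AWS Secrets Manager changes, the operator updates the helloworld Secret on its next refresh interval, and Reloader then restarts the pods so the new environment variables take effect.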
APIs have fast become a fundamental building block of modern software development. They fuel a vast range of technological advancements and innovations across all sectors. APIs are crucial to app development, the Internet of Things (IoT), e-commerce, digital financial services, software development, and more. Without APIs, the Internet as we know it would not exist. APIs, or Application Programming Interfaces, are rules and protocols that allow different software applications to communicate and interact with each other. They define the methods and data structures developers can use to access specific functionalities or data from a service or platform. APIs enable developers to create applications that can leverage the capabilities of other software systems without needing to understand the internal workings of those systems. Cybercriminals have capitalized on the Internet and the economy's reliance on APIs. Some of the most damaging breaches of the past decade have resulted from an API attack; take Equifax, Twitter, and Optus, for example. As such, the importance of API security has risen to prominence in the past few years. This article will outline and explain the foundations of API security. Authentication and Authorization Authentication verifies the identity of users or applications attempting to access an API. It ensures that only authorized entities have access; this is particularly important considering that 78% of attacks come from seemingly legitimate users who have maliciously achieved the proper authentication. Security teams can achieve effective authentication through various mechanisms such as API keys, tokens (OAuth, JWT), and certificates. Authorization, conversely, determines the permissions and level of access granted to authenticated users. Role-based access control (RBAC) and attribute-based access control (ABAC) are common authorization approaches. Data Privacy and Confidentiality Data exchanged through APIs may include sensitive information. API security ensures that data is encrypted both in transit and at rest. Transport Layer Security (TLS) encryption secures data during transmission, while encryption mechanisms, like database encryption, protect data at rest. Privacy considerations such as data masking and tokenization help prevent the exposure of sensitive data even within authorized requests. Input Validation and Output Sanitization API security involves validating and sanitizing input data to prevent injection attacks like SQL injection and cross-site scripting (XSS). Input validation ensures data matches the expected format, while output sanitization prevents malicious code from being injected into responses. Properly validated and sanitized input and output help mitigate various security vulnerabilities. Threat Detection and Prevention API security solutions often include mechanisms for detecting and preventing attacks, anomalies, and malicious behavior. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) can monitor API traffic and identify patterns indicative of attacks. Web application firewalls (WAFs) provide an additional layer of protection by filtering and blocking malicious requests. Rate Limiting and Throttling API security includes rate limiting and throttling mechanisms to prevent abuse and overuse of API resources. Rate limiting restricts the number of requests an entity can make within a specific timeframe; throttling limits the speed at which requests are processed. 
These measures prevent DDoS attacks, ensure fair usage, and maintain system performance. Logging and Monitoring Comprehensive logging and monitoring are vital for detecting and investigating security incidents. API security solutions should log detailed information about API requests and responses, including metadata, user agents, IP addresses, and timestamps. Advanced monitoring systems can analyze real-time logs to identify suspicious activities or deviations from normal behavior. Secure Coding Practices API security begins with secure coding practices during development. Developers should follow security guidelines, perform code reviews, and utilize security tools to identify vulnerabilities early in the development lifecycle. Employing coding practices that avoid common security pitfalls, such as buffer overflows and insecure deserialization, is essential. Vulnerability Management and Patching Regular vulnerability assessments and patch management are crucial to API security. Vulnerability scanning tools can identify known vulnerabilities in APIs and related components. Once vulnerabilities are discovered, security teams should promptly apply patches or updates to prevent exploitation. API Lifecycle Management API security considerations extend throughout the entire API lifecycle, from design and development to deployment and decommissioning. Security should be integrated into every phase of the API lifecycle, including design reviews, security testing, and secure deployment practices. Education and Training Raising awareness and providing training to developers, administrators, and users is a vital pillar of API security. Understanding common security threats, best practices, and how to use security features effectively can significantly enhance the security posture of APIs and the systems they interact with. These pillars of API security create a multi-faceted approach encompassing authentication, authorization, data privacy, threat detection, secure coding, and more. This comprehensive strategy is essential to safeguard APIs and the systems they connect against a constantly evolving landscape of security threats. By addressing these pillars, organizations can build robust and secure APIs that contribute to their software ecosystems' overall security and integrity.
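To make the rate-limiting and throttling pillar above more concrete, here is a minimal token-bucket sketch in Java (purely illustrative; production systems would typically rely on an API gateway or an established library rather than hand-rolled code):

Java
public class TokenBucketRateLimiter {

    private final long capacity;      // maximum burst size
    private final double refillPerMs; // tokens added per millisecond
    private double tokens;
    private long lastRefillTimestamp;

    public TokenBucketRateLimiter(long capacity, long refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMs = refillPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefillTimestamp = System.currentTimeMillis();
    }

    // Returns true if the request may proceed, false if the caller should reject it (e.g., HTTP 429)
    public synchronized boolean allowRequest() {
        long now = System.currentTimeMillis();
        tokens = Math.min(capacity, tokens + (now - lastRefillTimestamp) * refillPerMs);
        lastRefillTimestamp = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}

A per-client instance of such a limiter enforces a steady request rate while still allowing short bursts up to the bucket's capacity.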
In our rapidly evolving digital age where technology underpins almost every facet of our lives, cybersecurity has never been more critical. As the world becomes increasingly interconnected with personal devices and social networks to critical infrastructure and global business operations, the digital landscape has expanded and diversified, presenting new opportunities and unknown threats, bringing cybersecurity to the forefront of the global stage. Today, data is the lifeblood of our modern society, and it flows ceaselessly through the veins of the digital realm. It's the engine that powers our businesses, our governments, and our personal lives. We entrust it with our most sensitive information, from financial records to healthcare data and private conversations. The interconnected world has created unparalleled convenience but has also introduced unprecedented risk. The digital realm has become a battleground where malicious actors, from cyber criminals to nation-states, continually seek to exploit vulnerabilities, steal sensitive information, disrupt critical services, and sow chaos. The consequences of a cybersecurity breach are far-reaching and can include financial loss, reputation damage, and even threats to national security. Cyber threats’ sheer volume and sophistication continue to increase daily, necessitating a proactive and comprehensive approach to safety. "Zero Trust Security" has become an effective strategy to safeguard our interconnected world. What Is Zero Trust Security? Zero Trust Security is a cybersecurity framework and philosophy that challenges the traditional trust model within a network. In the past, network security often relied on a "perimeter-based" approach, assuming that threats were primarily external. This model created trust within the network where users and devices were usually granted broad access to resources once inside. Zero Trust Security, on the other hand, operates under the assumption that no entity—whether inside or outside the network—can be trusted implicitly. It advocates continuous verification and strict access controls to protect critical assets and data. The concept of Zero Trust Security has evolved in response to the changing cybersecurity landscape, characterized by an increasing number of data breaches, advanced persistent threats, and insider threats. Forrester Research analyst John Kindervag initially introduced the Zero Trust model in 2010. Kindervag's vision directly responded to the shortcomings of traditional security models, particularly the outdated "castle-and-moat" approach, which assumed that an organization's perimeter was secure and anyone inside it could be trusted. He recognized that this approach was no longer tenable in the face of increasingly sophisticated cyber threats. The Zero Trust model, conceived by Kindervag, represented a radical shift. It advocated a fundamental change in thinking, emphasizing that trust should not be automatically granted to anyone or anything, whether inside or outside the network. Instead, trust should be continuously earned through rigorous identity verification and monitoring. The term "Zero Trust" may sound extreme, but it reflects the need for a profound shift in cybersecurity mindset. The approach gained traction as significant data breaches and security incidents highlighted the weaknesses of traditional models, and organizations began to realize the importance of a more proactive and adaptive security stance. 
Key Principles of Zero Trust Security Zero Trust Security is built on several fundamental principles crucial to its effectiveness. These principles collectively help organizations create a security framework that minimizes risk and provides a robust defense against cyber threats; below are some of the fundamental principles in detail: Verify Identity The foundation of Zero Trust Security is identity verification. Users and devices must prove their identity before accessing network resources, typically achieved through multi-factor authentication (MFA), strong passwords, and other authentication methods. By verifying identities, organizations ensure that only authorized individuals or devices can access critical systems and data. Least Privilege Access The principle of least privilege dictates that users and devices should only have access to the minimum set of resources and data required to perform their specific tasks; this limits the potential damage if an account or device is compromised. By adhering to the principle of least privilege, organizations reduce the attack surface and the potential impact of security breaches. Micro-Segmentation Micro-segmentation is dividing a network into small, isolated segments or zones. Each component has its own access controls and security policies. This approach minimizes lateral movement within the network, making it more challenging for cybercriminals to move freely if they breach one segment. Micro-segmentation helps contain and isolate security incidents. Continuous Monitoring Continuous monitoring involves real-time network traffic analysis, user behavior, and device activity. It allows organizations to detect anomalies and potential security threats as they happen. Organizations can respond promptly to emerging threats by continuously monitoring the network, preventing or minimizing damage. Explicit Access Controls Access to network resources should not be granted by default; it should be explicitly defined based on well-defined policies. Access controls must be precise and consistently enforced, and changes or escalations in access permissions should be carefully reviewed and authorized. Explicit access controls ensure that only authorized actions are allowed. By adhering to these core principles, organizations can establish a security framework rooted in "never trust, always verify." This approach makes it much more difficult for cyber attackers to compromise systems, move laterally within the network, and gain access to sensitive data. It also aligns with the evolving threat landscape, where breaches can come from both external and internal sources. Zero Trust Security provides a proactive and adaptive strategy to protect against security threats. Benefits of Zero Trust Security Implementing a Zero Trust Security model offers numerous advantages, significantly enhancing an organization's cybersecurity posture. Here are the key benefits: Improved Security Posture Zero Trust Security provides a proactive and robust defense against evolving cyber threats. Organizations can better protect their digital assets and data by constantly verifying identities and reducing the attack surface through the principle of least privilege. Enhanced Protection Against Data Breaches Data breaches often result from unauthorized access to sensitive information. Zero Trust Security minimizes the risk of data breaches by ensuring that only authorized users and devices can access sensitive data. 
Because of micro-segmentation and least privilege access, potential damage is minimal, even if a breach occurs. Support for Remote and Hybrid Work Environments Post-pandemic, the modern workplace is increasingly remote and hybrid, with employees accessing company resources from various locations and devices. Zero Trust Security is well-suited for this environment, as it enforces strict access controls regardless of the user's location, device, or network. Regulatory Compliance Organizations are subject to stringent regulatory requirements concerning data security and privacy depending on which domains they operate, such as healthcare, banking, insurance, etc. Zero Trust Security helps organizations comply with these regulations by ensuring that sensitive data is accessed and handled according to the prescribed rules. Threat Detection and Response Continuous monitoring and real-time network traffic analysis enable organizations to identify and respond to security incidents as they happen. This rapid detection and response capability helps minimize the impact of security breaches. Reduced Insider Threats Whether intentional or accidental, insider threats pose a significant risk to organizations. Zero Trust Security mitigates this risk by applying the same strict access controls to all users and devices, regardless of their status within the organization. Adaptability To Evolving Threats The cybersecurity landscape constantly evolves, with new threats and vulnerabilities emerging regularly. Zero Trust Security is adaptable and can grow with the changing threat landscape, ensuring organizations remain resilient against emerging threats. Minimized Attack Surface Zero Trust Security minimizes the attack surface by segmenting the network and applying the least privileged access, making it more challenging for attackers to move laterally within the network and access critical systems. Improved User Experience Zero Trust Security enforces strict access controls but strives to maintain a positive user experience. Users are granted access to the needed resources, and authentication can be streamlined through single sign-on (SSO) and MFA solutions. Zero Trust Security is an essential risk mitigation strategy, reducing the likelihood of costly security incidents and breaches. This can lead to lower financial losses, improved reputation, and better business continuity. Zero Trust Security is a comprehensive approach to cybersecurity that offers a wide range of benefits, from improved protection against cyber threats to regulatory compliance and adaptability in the face of evolving security challenges. Organizations that adopt Zero Trust Security are better positioned to secure their digital assets and data in an interconnected and increasingly complex world. Implementing Zero Trust Security Adopting a Zero Trust Security model involves a series of steps and considerations. Organizations should begin by comprehensively assessing their network architecture, security policies, and data flows. The next step is to identify all physical and digital assets and all users and devices that interact with the network. Understanding organizations' existing security controls and policies is crucial in defining and documenting access control policies. Documentation on who should have access to which resources and under what circumstances is a critical consideration, ensuring access is based on the principle of least privilege, granting only the adequate access required for each user or device to perform their tasks. 
Essential consideration should be given to implementing robust authentication methods, including multi-factor authentication (MFA), requiring users and devices to provide multiple forms of identification to verify their identity. This step is crucial for ensuring that only authorized entities gain access. The best practice is to divide the network into smaller, isolated segments, using access controls based on the principle of least privilege, which limits the movement of adversaries between network segments and requires explicit authorization of communications. Implementing continuous monitoring tools to observe network traffic, user behaviors, and device activities enables early threat detection and the identification of anomalous patterns in real time. Organizations should invest in robust Identity and Access Management (IAM) solutions that help efficiently manage user identities, access policies, and authentication processes; IAM systems are critical in Zero Trust Security. Endpoints, including laptops, smartphones, and IoT devices, must also be secured: implementing endpoint security provides real-time threat detection and response capabilities that help ensure devices are protected from malicious actors. Data encryption at rest and in transit, data loss prevention (DLP), and data classification mechanisms are best practices for safeguarding sensitive information and an organization's digital assets. A detailed incident response plan that outlines how the organization will react to security incidents or breaches is critical in the Zero Trust model, ensuring all stakeholders understand their roles and responsibilities in the event of an incident. Organizations often need to evaluate the security of third-party vendors and service providers with whom they share data or collaborate. They need to ensure those external identities adhere to Zero Trust Security principles to prevent vulnerabilities entering the network through external connections. Consider a pilot implementation before rolling out Zero Trust Security across your organization. Start with a small, well-defined segment of the network to test the security controls' effectiveness and ensure that they align with the organization's requirements and goals. Once the pilot is successful, gradually expand the Zero Trust Security model to cover the entire organization. Monitor, assess, and refine the Zero Trust Security framework to adapt to evolving threats and organizational changes, and regularly review and audit security policies and controls to ensure they comply with relevant regulations and industry standards. Educating employees and users about the principles of Zero Trust Security and the importance of responsible and secure digital practices is essential for the success of this security model. Implementing Zero Trust Security is a journey that requires careful planning, commitment, and adaptability. Following the above-mentioned guidelines and continuously improving security measures can significantly enhance an organization's resilience to the ever-evolving threat landscape. Zero Trust Security provides a proactive and robust approach to safeguarding digital assets and data in an interconnected world. Figure 1: Microsoft’s internal Zero Trust architecture Case Studies of Successful Zero Trust Security Implementation To illustrate the effectiveness of Zero Trust Security, let's explore a few case studies of organizations that have successfully adopted this security model.
These case studies highlight how organizations with diverse backgrounds and security needs have successfully implemented Zero Trust Security to overcome challenges and gain significant benefits. They have seen improvements in data protection, security resilience, user experience, and the ability to adapt to a changing threat landscape. These examples underscore the flexibility and effectiveness of Zero Trust Security as a modern approach to cybersecurity. Google With its vast infrastructure and diverse user base, Google faced the challenge of securing its cloud-based services and protecting user data from various threats. It needed a solution that would work seamlessly with its distributed environment while preventing unauthorized access and lateral movement within the network. Google developed the BeyondCorp framework, which is a Zero Trust Security model. It shifted its security focus from the network perimeter to the user and device identity. BeyondCorp ensures that every access request is subject to rigorous identity verification and access controls. Google has experienced improved security, reduced risks, and enhanced user experiences with secure remote access, and it has shared its findings and model with the cybersecurity community. Akamai Akamai has developed a Zero Trust security strategy to eliminate traditional corporate VPNs and move away from the perimeter-based security model. The objective was to keep Akamai's business applications and data secure, ensuring lateral movement in the corporate network is prevented providing a better user experience. As part of the Zero Trust transformation, Akamai established a core set of principles. For example, in the transition to a perimeter-less environment where the Internet becomes the corporate network, every office must become a Wi-Fi hotspot, and application access is dynamically and contextually granted based on identity, environmental factors (such as location and time of day), and device signals (such as client-side certificates or device compliance to corporate security policy). Security guidelines align with Zero Trust's tenets; no machine or user would be trusted by default. This approach was based on finding cost-effective technologies that support mobility, enhanced security, flexible access, and virtualization, also taking advantage of the simplicity of the cloud. Federal Deposit Insurance Corporation (FDIC) The FDIC safeguards sensitive financial information. The organization faced challenges in protecting this data from external cyber threats and insider risks. FDIC adopted a Zero Trust Security approach to protect sensitive financial data and maintain the trust of the public and financial institutions. The organization has improved its data security and resilience to cyberattacks. Zero Trust Security helps the FDIC continuously monitor its network for potential threats, ensuring the security of the financial sector. The Future of Zero Trust Security The cybersecurity landscape continually evolves, driven by technological advancements, new threats, and changing user behaviors. Zero Trust Security, as a concept and practice, is expected to adapt and evolve to meet these challenges. As organizations continue to recognize the importance of robust cybersecurity measures, the adoption of Zero Trust Security is expected to rise. This trend will be driven by a growing awareness of the limitations of traditional security models and the need for a more adaptive approach to protect digital assets. 
The future of Zero Trust Security will involve the convergence of various technologies, such as identity and access management (IAM), artificial intelligence (AI), and machine learning (ML). These technologies will improve identity verification, anomaly detection, and automated responses to security incidents. Zero Trust Security will become more cloud-native with the shift to cloud-based services and remote work. Organizations will implement cloud security strategies that seamlessly integrate with Zero Trust principles, allowing secure access to cloud resources and applications. The proliferation of Internet of Things (IoT) devices presents a unique challenge. Zero Trust Security must adapt to accommodate IoT devices’ diverse and often resource-constrained nature while ensuring secure network integration. Zero Trust Security will become even more user-centric, focusing on securing the identity and actions of individual users, regardless of their location or the device they use. User behavior analytics and contextual awareness will be pivotal in risk assessment and access control. Automation will play a significant role in Zero Trust Security, allowing organizations to respond quickly to threats. Automated threat detection, incident response, and policy enforcement will help reduce the burden on security teams and improve response times. The concept of Zero Trust Security is inherently dynamic. It will continue to evolve to address emerging threats and vulnerabilities. As threat actors adapt and develop new tactics, Zero Trust Security must remain adaptable and responsive. With the introduction of more stringent data protection regulations, such as GDPR and CCPA, Zero Trust Security will be vital for organizations aiming to maintain regulatory compliance. The model's emphasis on data protection and access control aligns well with these requirements. Organizations must invest in educating and training their employees to implement Zero Trust Security successfully. Cybersecurity awareness programs will become integral to ensuring that all users understand their role in maintaining security. As the adoption of Zero Trust Security grows, collaboration within the cybersecurity community will be essential. Sharing best practices, threat intelligence, and insights will help organizations strengthen their security postures. Zero Trust Security is characterized by adaptability, technology integration, and a heightened focus on identity and data protection. As the threat landscape evolves, organizations must embrace Zero Trust Security as a foundational element of their cybersecurity strategy to protect their digital assets and data effectively. Conclusion Zero Trust Security has emerged as a foundational approach to cybersecurity in an interconnected and constantly evolving digital world. This proactive and adaptive model challenges the traditional notion of trust within a network, emphasizing rigorous identity verification and strict access controls. Organizations can significantly enhance their security posture by verifying identity, implementing least privilege access, micro-segmentation, continuous monitoring, and explicit access controls. Zero Trust Security offers numerous benefits, including improved data protection, enhanced security against breaches, and support for remote work environments. Real-world examples from organizations like Google, Akamai, and FDIC demonstrate the effectiveness of Zero Trust Security. 
These organizations have successfully implemented this model, overcoming challenges and reaping the rewards of improved security and resilience. Zero Trust Security will continue to evolve, adapting to emerging technologies and threats and witnessing increased adoption across various industries. It will become more user-centric, integrate with cloud-native strategies, and leverage automation and AI for enhanced security. As organizations navigate the complex and dynamic cybersecurity landscape, Zero Trust Security remains a valuable tool to secure digital assets, protect sensitive data, and respond effectively to the evolving threat landscape. By embracing the principles of Zero Trust, organizations can build a safer and more resilient digital world.
The rapid growth of the Internet of Things (IoT) has revolutionized the way we connect and interact with devices and systems. However, this surge in connectivity has also introduced new security challenges and vulnerabilities. IoT environments are increasingly becoming targets for cyber threats, making robust security measures essential. Security Information and Event Management (SIEM) systems, such as Splunk and IBM QRadar, have emerged as critical tools in bolstering IoT security. In this article, we delve into the pivotal role that SIEM systems play in monitoring and analyzing security events in IoT ecosystems, ultimately enhancing threat detection and response. The IoT Security Landscape Challenges and Complexities IoT environments are diverse, encompassing a wide array of devices, sensors, and platforms, each with its own set of vulnerabilities. The challenges of securing IoT include: Device diversity: IoT ecosystems comprise devices with varying capabilities and communication protocols, making them difficult to monitor comprehensively. Data volume: The sheer volume of data generated by IoT devices can overwhelm traditional security measures, leading to delays in threat detection. Real-time threats: Many IoT applications require real-time responses to security incidents. Delayed detection can result in significant consequences. Heterogeneous networks: IoT devices often connect to a variety of networks, including local, cloud, and edge networks, increasing the attack surface. The Role of SIEM Systems in IoT Security SIEM systems are designed to aggregate, analyze, and correlate security-related data from various sources across an organization's IT infrastructure. When applied to IoT environments, SIEM systems offer several key benefits: Real-time monitoring: SIEM systems provide continuous monitoring of IoT networks, enabling organizations to detect security incidents as they happen. This real-time visibility is crucial for rapid response. Threat detection: By analyzing security events and logs, SIEM systems can identify suspicious activities and potential threats in IoT ecosystems. This proactive approach helps organizations stay ahead of cyber adversaries. Incident response: SIEM systems facilitate swift incident response by alerting security teams to anomalies and security breaches. They provide valuable context to aid in mitigation efforts. Log management: SIEM systems collect and store logs from IoT devices, allowing organizations to maintain a comprehensive record of security events for auditing and compliance purposes. Splunk: A Leading SIEM Solution for IoT Security Splunk is a renowned SIEM solution known for its powerful capabilities in monitoring and analyzing security events, making it well-suited for IoT security. Key features of Splunk include: Data aggregation: Splunk can collect and aggregate data from various IoT devices and systems, offering a centralized view of the IoT security landscape. Advanced analytics: Splunk's machine-learning capabilities enable it to detect abnormal patterns and potential threats within IoT data streams. Real-time alerts: Splunk can issue real-time alerts when security events or anomalies are detected, allowing for immediate action. Custom dashboards: Splunk allows organizations to create custom dashboards to visualize IoT security data, making it easier for security teams to interpret and respond to events. IBM QRadar: Strengthening IoT Security Posture IBM QRadar is another SIEM solution recognized for its effectiveness in IoT security. 
It provides the following advantages: Threat intelligence: QRadar integrates threat intelligence feeds to enhance its ability to detect and respond to IoT-specific threats. Behavioral analytics: QRadar leverages behavioral analytics to identify abnormal activities and potential security risks within IoT networks. Incident forensics: QRadar's incident forensics capabilities assist organizations in investigating security incidents within their IoT environments. Compliance management: QRadar offers features for monitoring and ensuring compliance with industry regulations and standards, a critical aspect of IoT security. In the realm of IoT security, SIEM systems such as Splunk and IBM QRadar serve as indispensable guardians. Offering real-time monitoring, advanced threat detection, and swift incident response, they empower organizations to fortify their IoT ecosystems. As the IoT landscape evolves, the integration of these SIEM solutions becomes paramount in ensuring the integrity and security of connected devices and systems. Embracing these tools is a proactive stride toward a safer and more resilient IoT future.
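To make the data-aggregation point concrete, below is a minimal sketch of how an IoT gateway might forward a single security event to Splunk's HTTP Event Collector (HEC). The host name, HEC token, index, sourcetype, and event fields are placeholders, not values from any real deployment; adapt them to your own devices and Splunk configuration.
Shell
# Hypothetical example: push one IoT security event to Splunk HEC.
# Replace splunk.example.com, the HEC token, and the index/sourcetype with your own values.
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{
        "index": "iot_security",
        "sourcetype": "iot:gateway",
        "event": {
          "device_id": "sensor-042",
          "action": "firmware_update_rejected",
          "severity": "high",
          "src_ip": "10.20.30.40"
        }
      }'
Once events like this are indexed, the searches, alerts, and dashboards described above operate on them.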
Spring has come out with an OAuth2 solution, and in this article, we will look at the default configuration that comes bundled with the spring-oauth server. Details about how OAuth2.0 works are out of the scope of this article and the audience of this article is expected to have a basic understanding of it. You can find more details on it here. In this and other articles, I will talk more about the technical aspects of the Spring OAuth2.0 solution. Default Configuration Spring OAuth server comes with some default settings. One can customize it by configuring the server/clients in the application.yml config file. The following flows (grant types) are supported: Client Credentials flow (/oauth2/token endpoint) Authorization Code flow (Including PKCE) Resource Owner Credential Flow (Deprecated) Implicit flow (Code flow without code :)) Device Authorization flow Details around the configuration can be found at the GitHub. This is a sample codebase. Regarding default configuration, you can refer to the server configuration here where we can see how configuration can be done to use the default server configuration. AuthorizationServerTest can be referred to see how we can verify different endpoints through functional testing. To make this test run successfully, the OAuth server should be running. To run the server, you can use the AuthorizationServerApplication class using IDE or from the command prompt as well using the command below: Shell mvn spring-boot:run Let's look at the sample client "spring" (you can name it anything you want to) configuration and talk about the significance of each property below. YAML spring: security: oauth2: authorizationserver: client: spring: registration: client-id: "spring-test" #The name to be used in different configurations to refer this client. client-secret: "sMJ1ltm5wxdcOeEJGaE6WdFj9ArR75wkBqUgVE7vwwo=" ##Using D3PasswordEncoder client-authentication-methods: #methods supported to authenticate the client - "client_secret_basic" - "client_secret_post" authorization-grant-types: #The flows this client support - "authorization_code" - "refresh_token" - "client_credentials" redirect-uris: # The Url to be used at the end of successful authentication - "https://127.0.0.1:9443/" - "https://127.0.0.1:9443/login/oauth2/code/spring" post-logout-redirect-uris: - "http://127.0.0.1:8080/" scopes: - "openid" - "profile" - "email" require-authorization-consent: true client-id: A unique ID assigned to the client; it will be used to identify the client and configuration whenever the client makes a call to the authorization server. 
client-secret: Secret to be used by the client to authenticate itself when making a call to the authorization server client-authentication-methods: Authentication methods that the client uses to authenticate itself client_secret_basic: Basic authentication approach where credentials are provided as headers (httpHeaders -> httpHeaders.setBasicAuth(TEST_CLIENT_ID, TEST_SECRET)) client_secret_post: Authentication by providing credentials in the request body (application/x-www-form-urlencoded) authorization-grant-types: The grant types supported by the client redirect-uris: The redirect-uri(s) that is/are supported by the client and allow redirects that the client can use while starting the authorize_code flow post-logout-redirect-uris: redirect-uri(s) after successful logout from openId scopes: Supported scopes by the client require-authorization-consent: If consent is required during the authorization code flow Some of the default configurations that are good to know and not there by default in different examples are: Access token format by default is self-contained (jwt) and can be configured to opaque (reference) token through configuration. The refresh token TTL is 60min. The access token TTL is 5min. The authorization code TTL is 5min. The consent form is disabled by default. We can override the above default behavior by overriding the configuration in the application.yml file as: YAML spring: security: oauth2: authorizationserver: client: spring-client: require-authorization-consent: true token: access-token-format: reference authorization-code-time-to-live: PT10M access-token-time-to-live: PT10M refresh-token-time-to-live: PT2H Default Security FilterChain Sometimes I think about how the security framework is configured to make it work, so here is what I tried to put together on how the application is configured using different configurations, filters, and properties to make it work. Starting With Application FilterChain Filters(5): ApplicationFilterConfig (characterEncodingFilter) ApplicationFilterConfig (formContentFilter) ApplicationFilterConfig (requestContextFilter) ApplicationFilterConfig (springSecurityFilterChain) ApplicationFilterConfig (Tomcat WebsocketFilter-JSR356) ApplicationFilterConfig (springSecurityFilterChain) Application filter config "springSecurityFilterChain" is the main class that holds the filter (Spring Security) instances which are instantiated when a web application is started. The filter instance it (springSecurityFilterChain-ApplicationFilterConfig) holds is DelegatingFilterProxyRegistrationBean. DelegatingFilterProxyRegistrationBean is a ServletContextInitializer; it registers DelegatingFilterProxy and holds the name of the actual delegate. Filter (DelegatingFilterProxyRegistrationBean) [springSecurityFilterChain urls=[/*] order=-100]: Java public class DelegatingFilterProxyRegistrationBean extends AbstractFilterRegistrationBean<DelegatingFilterProxy> implements ApplicationContextAware TargetBeanName: springSecurityFilterChain Filter (DelegatingFilterProxy) DelegatingFilterProxy Spring provided filter implementation, a bridge between the servlet container and Spring's ApplicationContext. TargetBeanName: springSecurityFilterChain Delegate: FilterChainProxy FilterChainProxy This holds the Spring Security filter chain(s). Concerning the OAuth authorization server, there are two security filter chains (DefaultSecurityFilterChain): one for the OAuth endpoint and one for the rest. 
FilterChains: An overview of the OAuth2 filter chain DefaultSecurityFilterChain (OAuth2 endpoints) RequestMatcher (OAuth2AuthorizationServerConfigurer) Plain Text Or [ OAuth2ClientAuthenticationConfigurer Or [ Ant [pattern='/oauth2/token', POST], Ant [pattern='/oauth2/introspect', POST], Ant [pattern='/oauth2/revoke', POST], Ant [pattern='/oauth2/device_authorization', POST] ], OAuth2AuthorizationServerMetadataEndpointConfigurer Ant [pattern='/.well-known/oauth-authorization-server', GET], OAuth2AuthorizationEndpointConfigurer Or [ Ant [pattern='/oauth2/authorize', GET], Ant [pattern='/oauth2/authorize', POST] ], OAuth2TokenEndpointConfigurer Ant [pattern='/oauth2/token', POST], OAuth2TokenIntrospectionEndpointConfigurer Ant [pattern='/oauth2/introspect', POST], OAuth2TokenRevocationEndpointConfigurer Ant [pattern='/oauth2/revoke', POST], OAuth2DeviceAuthorizationEndpointConfigurer Ant [pattern='/oauth2/device_authorization', POST], OAuth2DeviceVerificationEndpointConfigurer Or [ Ant [pattern='/oauth2/device_verification', GET], Ant [pattern='/oauth2/device_verification', POST] ], OidcConfigurer Or [ OidcProviderConfigurationEndpointConfigurer Ant [pattern='/.well-known/openid-configuration', GET], OidcLogoutEndpointConfigurer Or [ Ant [pattern='/connect/logout', GET], Ant [pattern='/connect/logout', POST] ], OidcUserInfoEndpointConfigurer Or [ Ant [pattern='/userinfo', GET], Ant [pattern='/userinfo', POST]] ], NimbusJwkSetEndpointFilter Ant [pattern='/oauth2/jwks', GET] ] Filters (25) Plain Text 0 = {DisableEncodeUrlFilter} 1 = {WebAsyncManagerIntegrationFilter} 2 = {SecurityContextHolderFilter} 3 = {AuthorizationServerContextFilter} 4 = {HeaderWriterFilter} 5 = {CsrfFilter} 6 = {OidcLogoutEndpointFilter} 7 = {LogoutFilter} 8 = {OAuth2AuthorizationServerMetadataEndpointFilter} 9 = {OAuth2AuthorizationEndpointFilter} 10 = {OAuth2DeviceVerificationEndpointFilter} 11 = {OidcProviderConfigurationEndpointFilter} 12 = {NimbusJwkSetEndpointFilter} 13 = {OAuth2ClientAuthenticationFilter} 14 = {BearerTokenAuthenticationFilter} 15 = {RequestCacheAwareFilter} 16 = {SecurityContextHolderAwareRequestFilter} 17 = {AnonymousAuthenticationFilter} 18 = {ExceptionTranslationFilter} 19 = {AuthorizationFilter} 20 = {OAuth2TokenEndpointFilter} 21 = {OAuth2TokenIntrospectionEndpointFilter} 22 = {OAuth2TokenRevocationEndpointFilter} 23 = {OAuth2DeviceAuthorizationEndpointFilter} 24 = {OidcUserInfoEndpointFilter} DefaultSecurityFilterChain (other endpoints) RequestMatcher (AnyRequestMatcher) Filters (14) Plain Text 0 = {DisableEncodeUrlFilter} 1 = {WebAsyncManagerIntegrationFilter} 2 = {SecurityContextHolderFilter} 3 = {HeaderWriterFilter} 4 = {CsrfFilter} 5 = {LogoutFilter} 6 = {UsernamePasswordAuthenticationFilter} 7 = {DefaultLoginPageGeneratingFilter} 8 = {DefaultLogoutPageGeneratingFilter} 9 = {RequestCacheAwareFilter} 10 = {SecurityContextHolderAwareRequestFilter} 11 = {AnonymousAuthenticationFilter} 12 = {ExceptionTranslationFilter} 13 = {AuthorizationFilter} Default Response Token Endpoint By default, the token response for the /oauth2/token endpoint will be: JSON {
"access_token":"eyJraWQiOiJiOTM0NjIyMy00ZWJiLTQyZjItYTAyYy1hNDlkNDQwOWRlMjEiLCJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjYwNjAiLCJzdWIiOiJzcHJpbmctdGVzdCIsImF1ZCI6InNwcmluZy10ZXN0IiwibmJmIjoxNjk3MTgzODU2LCJleHAiOjE2OTcxODQxNTYsImlhdCI6MTY5NzE4Mzg1Nn0.KzYvm4YAuLRvpF9eco-z1ESbYU-MCChvxbdEPuGgQN-8seco8MgLWWoGM4dbbMRBJLe3Rv3YAEGhJ9qqenNtpFmVnysAUFqw_S8GEUpPlXzzRTnV_qoeqY9YVazCn9TonJJkjzj_RATTHgDx4TD6ZXSP963L5fwNjLtQ2Cp_yoi5R8WDgMkpvOubmuhjAxYpRH7rBH3qzNWo3vqRPuWreeoyaRyK-9HNOTKxT3vj7aLkTK1wyxzfPxliXXXMJ4IsxjxUOTfzzfHF9qmOYZCoCCgVtNlopsSKmjJKRID8UVugzmYQx1pZkUSPMOxiz1AloEX1-6DhgoC3lMi0-Ez6pQ", "token_type":"Bearer", "expires_in":299 } If you parse the access_token using https://jwt.io/, you can view the claims issued in the token. The default set of claims are: JSON { "iss": "http://localhost:6060", "sub": "spring-test", "aud": "spring-test", "nbf": 1697183856, "exp": 1697184156, "iat": 1697183856 } Metadata Endpoints 1. http://localhost:6060/.well-known/oauth-authorization-server 2. http://localhost:6060/.well-known/openid-configuration Metadata endpoints provide the details around the authorization-server and openId current configuration and endpoints exposed. Code Flow Token response (/oauth2/token) JSON { "access_token":"eyJraWQiOiI4M2ZiMmRhYy1hZGNlLTRkNzgtODlhYy0wOGQzM2U3OGRmNGMiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ1c2VyMSIsImF1ZCI6InNwcmluZy10ZXN0IiwibmJmIjoxNjk4MTUxODMzLCJzY29wZSI6WyJvcGVuaWQiLCJwcm9maWxlIl0sImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6NjA2MCIsImV4cCI6MTY5ODE1MjEzMywiaWF0IjoxNjk4MTUxODMzfQ.jbNg1MyrL-9kHpfhhkarNfSq1VuS3fPJUZyXjSaliuaziKZzSrma2OyUtVrrPJYzv7FMk-pGrTZVJLZ8f6Jayq2IbHkuWl2XYexRRQmUUDSeC3WMxDhWqezqRc-AEyrTQXm2d0HNs0zdJX9H28bSpGg_SADuKuN-vLuFp3_5w2utveuYxq1e2Ts-IXE-9ulf9O19Mj0Wf9hgENTOZiKbqUWvvoZwXhsx4LzPXqGKM0MbZTS6kFpdSZIgzcbaPzcMX_Vq_B2AU9_UAlJua2Vzxh-9rdJ7SPDVxT-ezoUGp61c1s5eDop2zNszjDqd7kE4qepCiJy6bUuwvP7yewdreg", "refresh_token":"ARM4_nA8LFzFajbTOzJjN1OTGByZAFu9HGoDeZ9mfciY9vEv5XbWc7MuzcQnAArZMMnB_ydsCxsLRC4HY4u0oh9DscHySysYPXb1BE-7JBwcdH_hVKM3pXWmO4NEiDY", "scope":"openid profile", "id_token":"eyJraWQiOiI4M2ZiMmRhYy1hZGNlLTRkNzgtODlhYy0wOGQzM2U3OGRmNGMiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ1c2VyMSIsImF1ZCI6InNwcmluZy10ZXN0IiwiYXpwIjoic3ByaW5nLXRlc3QiLCJhdXRoX3RpbWUiOjE2OTgxNTE4MzMsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6NjA2MCIsImV4cCI6MTY5ODE1MzYzMywiaWF0IjoxNjk4MTUxODMzLCJzaWQiOiJfVmYzY1ZTREd1UDQtMEN6czNzR1BxQTZkaUk1ZjB1TE1pT1BkUzd3Z0c4In0.QDHOUa2p8RZKHVuhnHUsvMX-HmEvsGXXQ6QgfidXEMO0vDxJilmYIWW90z9Etc2cJ1SjfFk4OrUZWQF2foa2secatuAeffTbUx_9lTPD4KT_xzg9SsP69tHt55J2U35FcFef2WHuGF06MOj2hr6dVqlk8B5ORV0z_XiBM9FBEmnraLvXWtXtlwp_-jGA95O7y2U8SZt9H8wns-IpatXshB8lnUk-P5NjV8-CUwqtb9FHKOr9ie4KSXHQ8IpY2FaBMI0nA4E_hCUV2xpP_nBAb7Prh5EDYoCFkjHtO5ZXe-VYhyff9AydPzFsdSmEeF6BEK6SeJPBXRUvtL_bZykjdA", "token_type":"Bearer", "expires_in":299 } In the code flow at the end of the flow when code is issued to the client at redirect_uri, the backend service will collect the code and call the token endpoint /oauth2/token. Introspection Response When the Token Is Active JSON { "active": true, "client_id": "spring-test", "iat": 1698151833, "exp": 1698155433 } Introspection Response Post Revoke JSON { "active": false, "iat": 0, "exp": 0 } I hope you found this article informative. I will put more details in another post on how to customize the token response by adding new claims to JWT.
In a world where the click of a mouse can be as powerful as a nuclear button, the evolution of cyber threats has taken a sinister turn. What was once a digital nuisance in the form of ransomware has now transformed into a geopolitical nightmare, the rise of ransom nations. This transformation signifies a dangerous escalation of state-sponsored cyberattacks, where entire nations harness the power of code as a weapon. This is an unsettling journey from ransomware to what can only be described as "ransom nations." Concept of Ransomware Imagine a digital hostage situation. Ransomware is precisely that. It's malicious software that sneaks into your computer or network, encrypts your valuable files, and holds them hostage. To decrypt your files, you're asked to pay a hefty ransom, usually in cryptocurrencies like Bitcoin. You might wonder why anyone would fall for such a scheme. The truth is ransomware is cleverly designed to exploit vulnerabilities. It often sneaks into your system through a seemingly innocent email attachment or a fake software update. Once inside, it swiftly encrypts your files, leaving you with a tough choice: pay the ransom or lose your data forever. Ransomware isn't new, but it's gotten nastier over the years. It started as an annoying inconvenience and has evolved into a global threat. Ransomware authors have become more organized, using advanced encryption methods and targeting not only individuals but also large organizations and even municipalities. State-Sponsored Cyberattacks State-sponsored cyberattacks are the digital weapons wielded by governments in an age where lines between physical and virtual warfare blur. Unlike typical cybercrimes, nation-states orchestrate or support these attacks, making them highly sophisticated and organized. The motivations behind state-sponsored cyberattacks are multifaceted. They range from espionage for national security and economic gain to the pursuit of geopolitical influence. Nation-states engage in these attacks to gather intelligence, disrupt adversaries, or advance their strategic interests in a covert manner. Some notable examples of state-sponsored cyberattacks include: Stuxnet (2010): A joint project of the U.S. and Israel, Stuxnet targeted Iran's nuclear facilities, physically damaging its centrifuges. Russian Hacking of DNC (2016): Russian state actors hacked the Democratic National Committee's servers, sparking a major political controversy during the U.S. presidential election. WannaCry Ransomware Attack (2017): Attributed to North Korea, this global attack encrypted computers and demanded ransom payments in Bitcoin. The Evolution of Ransom Nations Once, it was mainly individuals or small groups, hidden behind pseudonyms, launching ransomware attacks for personal profit. However, the landscape has transformed dramatically. Now, we witness nation-states wielding these digital weapons with remarkable sophistication. They've graduated from basement hackers to state-sponsored actors. The nexus between ransomware and state interests is where things get intriguing. State-sponsored cyberattacks are no longer merely about making a quick buck. They're about achieving strategic objectives, from espionage to economic disruption and even political manipulation. Ransomware is now a potent tool in the hands of governments to further their agendas. This evolution didn't happen in a vacuum. It occurred because cybersecurity has risen to the forefront of national security concerns. 
As nations rely increasingly on digital infrastructure, they become vulnerable. Protecting against cyber threats is not just about preventing data breaches; it's about safeguarding a nation's very foundations. The rise of Ransom Nations has significant geopolitical implications. It reshapes alliances, fuels tensions, and challenges the traditional rules of engagement. Cyber-attacks, once viewed as merely virtual, now have real-world consequences. They can disrupt critical infrastructure, influence elections, and even spark conflicts. Efforts to Combat State-Sponsored Cyberattacks In the ever-expanding realm of cybersecurity, borders blur, and collaboration thrives. Countries unite forces, exchanging intelligence and orchestrating joint strategies to fend off state-sponsored cyber threats. This unified front not only fortifies global defense but also ushers a new era of digital diplomacy. International heavyweights like the United Nations and INTERPOL emerge as linchpins in this battle. Serving as crucibles of discussion, they foster knowledge exchange and harmonize responses. The UN, for instance, pioneers norms discouraging nations from attacking critical infrastructures. Nations are crafting a digital code of conduct outlining the rules of engagement in cyberspace. These agreements aim to staunch the flow of state-backed cyber assaults, ensuring that the virtual arena mirrors the order of the physical world. Cyber diplomats step into the fray, deftly navigating negotiations to pacify tensions and quell conflicts. Their efforts stand as a wall, preventing digital conflicts from spiraling into global crises. Diplomacy, hence, becomes the shield that safeguards our interconnected world. Cybersecurity Best Practices Against Ransomware The first line of defense against ransomware starts with individuals and organizations adopting robust cybersecurity best practices. This means regularly updating software, using strong, unique passwords, and enabling multi-factor authentication. It's about being vigilant, recognizing phishing attempts, and avoiding suspicious downloads. In essence, it's practicing good cyber hygiene to reduce vulnerabilities. Ransomware can be devastating, but resilience can soften its blow. One key aspect of resilience is data backup. Regularly backing up data to offline or cloud storage ensures that even if you fall victim to ransomware, you can restore your systems without paying a ransom. It's like having a spare key to your digital house. Knowledge is power. Seminars and workshops that tend to educate individuals and employees about the risks of ransomware and the importance of cybersecurity are essential. Awareness campaigns can help people recognize the signs of an attack and respond effectively. When everyone becomes a part of the solution, the entire digital ecosystem becomes more secure. Conclusion The rise of state-sponsored cyberattacks demands our attention and collective action. As we navigate this digital frontier, it is clear that cybersecurity is not just a technological challenge; it's a global imperative. Only through international cooperation, innovation, and unwavering vigilance can we hope to secure our digital future and safeguard against the escalation of these unprecedented threats.
Deployed by more than 60% of organizations worldwide, Kubernetes (K8s) is the most widely adopted container-orchestration system in cloud computing. K8s clusters have emerged as the preferred solution for practitioners looking to orchestrate containerized applications effectively, so these clusters often contain various software, services, and resources, enabling users to deploy and scale applications with relative ease. To support a typical K8s environment operation, a cluster is often granted access to other environments such as artifact repositories, CI/CD environments, databases etc. Thus, K8s clusters can store customer data, financial records, intellectual property, access credentials, secrets, configurations, container images, infrastructure credentials, encryption keys, certificates, and network or service information. With so many clusters containing potentially valuable and lucrative data exposed to the internet, K8s provides a tempting target for threat actors. This risk escalates with the number of organizations that have misconfigurations that leave K8s clusters exposed and vulnerable to attacks. In recent research, Aqua Nautilus discovered that in a period of three months, more than 350 reputable organizations — some in the Fortune 500 — and open-source projects were completely exposed to the world. This exposure was for a period of several days to several months. Exploited by a threat actor, these misconfigurations could have resulted in a severe security breach. In the cases of exposed clusters of open source projects, if exploited by attackers they may result in a supply chain infection vector with implications for millions of users. Aqua Nautilus researchers found that it could take just one Shodan search for an organization’s misconfigured cluster to be identified. What’s at Risk in an Exposed Cluster? Over a three-month period, Aqua Nautilus conducted a series of separate searches using Shodan. From these searches, the team pinpointed just over 350 distinct IP addresses connected to at-risk K8s API servers. At least 60% of them were breached and had an active campaign that deployed malware and backdoors. K8s clusters often contain secrets and unauthenticated access to the API server may enable access to these secrets, so open access to it enables an attacker to take full control over the cluster. Even worse, K8s clusters usually don’t only store their own secrets. In many instances, the K8s cluster is a part of the organization's software development life cycle (SDLC), so it grants access to source code management (SCM), continuous integration/continuous deployment (CI/CD), registries, and the cloud service provider (CSP). The secrets often contain information about internal or external registries. In many cases, developers constructed a configuration file of a registry (such as “.dockerconfig”), containing links to other environments and secrets or credentials (including Docker Hub, Cloud Service Provider, and internally managed ones). Threat actors can use these credentials to expand their reach. They can even poison the registry (if the key allows that) to run malicious code on further systems in the network. Real-World Exposure Examples The team discovered live examples of unsecured K8s API servers containing a wide range of additional secrets associated with various environments. These include SCM environments like GitHub, CI platforms like Jenkins, various registries such as Docker Hub, external database services like Redis or PostgreSQL, and many others. 
SCM access tokens allow an attacker to access an organization’s code. In some cases, attackers can even modify it to damage the organization (if the key allows that). Some of the misconfigured clusters identified were only accessible for a few hours, but Aqua Nautilus’ data collection tools managed to identify and record the exposed information. This highlights a sobering truth about such misconfigurations: even if promptly detected and corrected, a well-prepared attacker with automation capabilities can still gain access to the K8s cluster at a later stage or infiltrate various other elements of the SDLC. In security, automation is very much a two-way street. With just one, limited instance of secret exposure, threat actors may be able to gain authenticated access to your cluster at will. What Misconfigurations Should Organizations be Aware of? Aqua Nautilus research identified two common misconfigurations widely found in organizations and actively exploited in the wild. 1. Anonymous User With High Privileges When creating a new cluster, there are four things you need to take into consideration: 1) if the API server is exposed to the internet; 2) who is authorized to communicate with the cluster; 3) which privileges they have; and 4) if there are any further access controls. In many cases (both in native K8s environments and even on some cloud providers’ managed clusters), the cluster is set to be open to the internet by default, and there are many practical reasons to do that. But this also means that if the IP address on which the API server is hosted is scanned, it will reveal that this is a K8s cluster, and anyone can try connecting to it. Additionally, in many cases, unauthenticated requests to the K8s cluster are enabled by default. This means that the cluster will receive requests from anyone. However, requests by anonymous users have no privileges by default, so they result in 403 replies, which forbid access. Lastly, admission controls need someone to proactively create them. The team has seen cases in which practitioners bind the anonymous user role with other roles, often admin roles, which puts their clusters in danger. A mixture of these misconfigurations can allow attackers to gain unauthorized access to the Kubernetes cluster, potentially compromising all applications running on it as well as other environments. As a result, your organization could be only one YAML away from disaster: a single misconfiguration or easy mistake can lead to an exposed cluster. A sketch of this kind of binding, and how to check for it, appears after the mitigation tips below. 2. The 'Kubectl Proxy' Command Another misconfiguration that Nautilus observed wasn’t previously seen or published about: using the ‘kubectl proxy’ command with specific flags. Some publications encourage practitioners to use the ‘kubectl proxy’ command for a number of purposes. For instance, there are tutorials about Kubernetes Dashboard installation that encourage users to run the proxy command and do not explicitly warn about possible implications. When running ‘kubectl proxy’, you're forwarding authorized and authenticated requests to the API server. When running the command with the flags ‘--address=`0.0.0.0` --accept-hosts `.*`’, the proxy on the workstation will listen for and forward authorized and authenticated requests to the API server from any host that has HTTP access to the workstation.
Note: the privileges are the same as those of the user who ran the ‘kubectl proxy’ command. The Threat Is Real Aqua Nautilus recorded many attacks in the wild against their honeypots, underscoring the threat to exposed K8s clusters. In fact, 60% of the clusters were actively under attack by cryptominers. Three primary examples are the Lchaia/xmrig campaign, the SSWW attack, and the Dero campaign. Researchers also discovered the RBAC Buster campaign, which exploits RBAC to create a well-hidden backdoor. Finally, the team reported a novel, highly aggressive campaign by TeamTNT (part 1, part 2). In this campaign, TeamTNT is searching for and collecting cloud service provider tokens (AWS, Azure, GCP, etc.). Tips for Mitigation To reduce the risk from the active threat campaigns targeting these misconfigurations, secure cloud resources, and protect your clusters, Nautilus recommends that organizations follow these five steps: Train Employees: Organizations must invest in training their staff about the potential risks, best practices, and correct configurations. This will minimize human errors leading to such misconfigurations. Secure the `kubectl proxy`: Ensure that the `kubectl proxy` is not exposed to the internet. It should be set up within a secure network environment and accessible only by authenticated and authorized users. Use Role-Based Access Control (RBAC): RBAC is a native Kubernetes feature that can limit who can access the Kubernetes API and what permissions they have. Avoid assigning the admin role to an anonymous user. Make sure to assign appropriate permissions to each user and strictly adhere to the principle of least privilege. Implement Admission Control policies: Kubernetes Admission Controllers can intercept requests to the Kubernetes API server before persistence, enabling you to define and enforce policies that bolster security. The team strongly recommends admission controls that prevent binding any role with the anonymous role, which will harden the security posture of your Kubernetes clusters. Audit Regularly: Implement regular auditing of your Kubernetes clusters. This allows you to track each action performed in the cluster, helping in identifying anomalies and taking quick remedial actions. By employing these mitigation strategies, organizations can significantly enhance their Kubernetes security, ensuring that their clusters are safe from common attacks. As Kubernetes grows and is used by more businesses, it is imperative that practitioners become familiar with the risks it could bring to organizations. It is critical to remember that security in K8s is not a static, set-it-and-forget-it situation. There are constant actions and events occurring in the cluster, and just because your cluster was secure yesterday does not mean it remains secure today. Because of this, manually checking your clusters for known misconfigurations is not enough. If your organization uses K8s, there is always a risk if you aren’t actively scanning.
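To make the two misconfigurations above more tangible, here is a hedged sketch of how you might audit your own cluster for them. It assumes you have cluster-admin access via kubectl and permission to impersonate users; the API server address is a placeholder, and the clusterrolebinding shown (named anonymous-admin here purely for illustration) is the dangerous pattern to look for and remove, not something to create.
Shell
# List what the anonymous user is currently allowed to do (requires impersonation rights).
kubectl auth can-i --list --as=system:anonymous

# The dangerous pattern described above: a binding that grants cluster-admin to the anonymous user.
# If a binding like this exists in your cluster, remove it. Never run this:
# kubectl create clusterrolebinding anonymous-admin \
#   --clusterrole=cluster-admin \
#   --user=system:anonymous

# Probe the API server from outside without credentials; a hardened cluster returns 401/403,
# never data. Replace the host with your own API server address.
curl -k https://<api-server-host>:6443/api/v1/namespaces/default/secrets

# If you must run kubectl proxy, keep the defaults (listen on 127.0.0.1 only) rather than
# exposing it with --address=0.0.0.0 --accept-hosts='.*'.
kubectl proxy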
Flow is a permissionless layer-1 blockchain built to support the high-scale use cases of games, virtual worlds, and the digital assets that power them. The blockchain was created by the team behind Cryptokitties, Dapper Labs, and NBA Top Shot. One core attribute that differentiates Flow from the other blockchains is its usage of capability-based access control. At a high level, this means instead of the typical model where sets of permissions are given to users through roles, permissions instead are granted by issuing capabilities. These capabilities can be seen as digital keys that unlock specific pieces of functionality, such as access to a specific resource (object or function). Capabilities make it possible to grant users dynamic and fine-grained access. But why is this important for you as a Flow developer? With capabilities, you can define a user’s granular-level access privileges. So if you’re building a music app and want to give the ability to access top playlists to only premium users, you can control that very easily with the built-in capabilities functionality of Flow. Source: NewsBTC What Is Blockchain Authorization? Blockchain authorization is the methodology by which access to information and execution permissions are granted on a blockchain system. Several operations that a user may need to perform within a blockchain may include the following: Creating new smart contracts Updating smart contract code Executing smart contract functions Being a validator node Updating smart contract data If the methodology used to manage the execute permissions of the above operations is not fault-tolerant (or, able to handle errors and unexpected conditions without causing a security breach), then it could be a huge security risk for the application. There are several major types of authorization that have the same objectives but slightly different implementations. Access Control Lists (ACLs) Access Control Lists (ACLs) refer to the authentication mechanism that works on maintaining individual lists for managing access to different objects. ACLs are like guest lists for a resource. Those participants that are on the guest list of a certain object are allowed to access and others are not. Role-Based Access Control (RBAC) Role-based access control refers to the idea of assigning permissions to network participants on the basis of their role within an organization. The access rules are mapped to the roles rather than the individual identities. This is the most common form of authorization used in popular products like AWS. Capability-Based Authorization A capability is an unforgeable token of authority. In capability-based authorization, your identity does not matter. If you receive an access token from the owner/admin that grants you the capability to access a resource, and you are able to execute that capability, then you will have access. At runtime, the application does not check what your identity is but only that you have the capability to access the requested resource. ACLs Versus Capability-Based Authorization There are certain drawbacks to implementing ACLs, especially in the context of decentralization, which we’ll discuss below. Ambient Authority Problem Let’s say that as a user, you have received several different types of access and privileges to an app on your operating system. At some point, you request the app to fetch certain data for you. You would want to make sure that the app fetches only the data that’s absolutely necessary and doesn’t access anything else. 
However, in the case of ACL systems, there is no way to make sure this happens since the app has “ambient” authority. This can only be solved by using capability-based security systems. Watch this video to learn more. Confused Deputy Problem Let’s say there’s a program A, requesting a program B to perform certain actions. There might be instances where only program B has access to perform some of those actions, but program A does not. Program B still performs them because it didn’t double-check. In this case, program B was tricked into misusing its privileges by program A. This can be solved by using capabilities. Watch this video to learn more. Here’s how the company Tymshare faced this problem 25 years ago. ACL Attack Vector Because ACL lists are usually maintained with a centralized owner, it is prone to malicious updates at any time. Using capabilities takes away the power of performing malicious updates from a centralized owner and hence makes the system secure from the large ACL attack vector. Source: "Capability-based security — enabling secure access control in the decentralized cloud" About Capabilities A capability (also known as the ‘key’) is a hash that designates both the resource and access to it. This is also the model implemented in Bitcoin where “your key is your money”, and in Ethereum where “your key is gas for EVM computations." In the Flow blockchain, “your keys are your data,” and hence, data access is controlled directly by keys instead of identities. By tying access to the key, capability-based models push security to the edge, decentralizing large attack vectors. Capabilities also make it very easy to write code that defines security privileges in a granular fashion. There are two major types of capabilities in Flow blockchain: Public Capabilities Public capabilities are created using public paths and hence have the domain “public." After creation, users can access them with authorized accounts (“AuthAccount”) and public accounts (“PublicAccount”). Private Capabilities Private capabilities are created using private paths and hence have the domain “private." After creation, they can only be accessed by authorized accounts (“AuthAccount”) and not by public accounts (“PublicAccount”). 3 Tenets of Capability-Based Security Encryption-based: Capability-based security always has an unforgeable key to go with a particular access. This means that just the identity of a participant is not enough to get access. Decentralized: Capability-based security is totally decentralized. This means that the success of the security system is not dependent on a single owner. Granular: It is easier to define fine-grained access to data and resources using capability-based security. Creating a Capability in Cadence Cadence is the programming language used to create Flow contracts. Below we’ll talk about the code used to create capabilities in Cadence. Creating Capability Using Link Function The link function of an authorized account (“AuthAccount”) is used to create a capability. Swift fun link<T: &Any>(_ newCapabilityPath: CapabilityPath, target: Path): Capability<T>? newCapabilityPath is the public or private path identifying the capability. target is any public, private, or storage path that leads to the object that will provide the function defined by this capability. T is the type parameter for the capability type. 
The above function will: Return nil if the link for a given capability path already exists Return the capability link if the link doesn’t already exist Removing Capability Using Unlink Function The unlink function of an authorized account (“AuthAccount”) is used to remove a capability. Swift fun unlink(_ path: CapabilityPath) path is the public or private path of the capability that should be removed. Other Important Functions getLinkTarget This function can be used to get the target path of a capability. Swift fun getLinkTarget(_ path: CapabilityPath): Path? getCapability This function can be used to get the link of existing capabilities. Swift fun getCapability<T>(_ at: CapabilityPath): Capability<T> check This function is used to check if the target currently exists and can be borrowed. Swift fun check<T: &Any>(): Bool borrow This function is used to borrow the capability and get a reference to a stored object. Swift fun borrow<T: &Any>(): T? Code Examples of Creating a Capability With Cadence We will follow a simple example where: Step 1: We will create a smart contract with a function as a resource. Step 2: We will access the resource. Create a capability to access the resource in that smart contract. Create a reference to that capability using the borrow function. Execute the function resource. Step 1: Creating a Car Smart Contract Explanations are in the comments below: Swift pub contract Car { // Declare a resource that only includes one function. pub resource CarAsset { // A transaction can call this function to get the "Honk Honk!" // message from the resource. pub fun honkHorn(): String { return "Honk Horn!" } } // We're going to use the built-in create function to create a new instance // of the Car resource pub fun createCarAsset(): @CarAsset { return <-create CarAsset() } init() { log("Creating CarAsset") } } Step 2: Accessing the honkHorn() Function in the CarAsset Resource of Car Smart Contract The code below includes all three steps: Creating the capability to access the resource from CarAsset Creating a reference by borrowing the capability Executing the HonkHorn function Explanations are in the comments below: Swift import Car from 0x01 // This transaction creates a new capability // for the CarAsset resource in storage // and adds it to the account's public area. // // Other accounts and scripts can use this capability // to create a reference to the private object to be able to // access its fields and call its methods. transaction { prepare(account: AuthAccount) { // Create a public capability by linking the capability to // a `target` object in account storage. // The capability allows access to the object through an // interface defined by the owner. // This does not check if the link is valid or if the target exists. // It just creates the capability. // The capability is created and stored at /public/CarAssetTutorial, and is // also returned from the function. let capability = account.link<&Car.CarAsset>(/public/CarAssetTutorial, target: /storage/CarAssetTutorial) // Use the capability's borrow method to create a new reference // to the object that the capability links to // We use optional chaining "??" to get the value because // result of the borrow could fail, so it is an optional. // If the optional is nil, // the panic will happen with a descriptive error message let CarReference = capability.borrow() ?? panic("Could not borrow a reference to the Car capability") // Call the honkHorn function using the reference // to the CarAsset resource. 
log(CarReference.honkHorn()) } } If you execute the above transaction, you should see the message “Honk Horn!” in your console. Refer to this tutorial to learn how to deploy a contract and execute Cadence code. Conclusion In this article, we learned the types of blockchain authorization and the added advantage of using capabilities over other methods of authorization. We also learned how to create, execute, and transfer capabilities in Cadence, the smart contract language of Flow blockchain. These learnings will help you as a developer to write highly secure code when building on Flow. In essence, capabilities open up a new paradigm of blockchain authorization, making it very easy for developers to define access at a granular level. The fact that Flow blockchain uses this paradigm for data access makes it one of the most secure blockchain options out there. And as you probably noticed, it is super easy to create capabilities in Flow. I hope that you enjoyed this deep dive into capability-based security in Flow blockchain. You can refer to the official docs page on Capabilities to learn more. I also recommend that you get your hands dirty by going through Flow Docs and this tutorial on Capabilities.
There has been a growing focus on the ethical and privacy concerns surrounding advanced language models like ChatGPT and OpenAI GPT technology. These concerns have raised important questions about the potential risks of using such models. However, it is not only these general-purpose language models that warrant attention; specialized tools like code completion assistants also come with their own set of concerns. A year into its launch, GitHub’s code-generation tool Copilot has been used by a million developers, adopted by more than 20,000 organizations, and generated more than three billion lines of code, GitHub said in a blog post. However, since its inception, concerns have been raised by many about the legal risks associated with copyright issues, privacy concerns, and, of course, insecure code suggestions, of which examples abound, including dangerous suggestions to hard-code secrets in code. Extensive security research is currently being conducted to accurately assess the potential risks associated with these newly advertised productivity-enhancing tools. This blog post delves into recent research by Hong Kong University to test the possibility of abusing GitHub’s Copilot and Amazon’s CodeWhisperer to collect secrets that were exposed during the models' training. As highlighted by GitGuardian's 2023 State of Secrets Sprawl, hard-coded secrets are highly pervasive on GitHub, with 10 million new secrets detected in 2022, up 67% from 6 million one year earlier. Given that Copilot is trained on GitHub data, it is concerning that coding assistants can potentially be exploited by malicious actors to reveal real secrets in their code suggestions. Extracting Hard-Coded Credentials To test this hypothesis, the researchers built a prompt-construction algorithm and ran an experiment to extract credentials from the LLMs. The conclusion is unambiguous: by constructing 900 prompts from GitHub code snippets, they managed to successfully collect 2,702 hard-coded credentials from Copilot and 129 secrets from CodeWhisperer (false positives were filtered out with a special methodology described below). Impressively, among those, at least 200, or 7.4% (respectively 18 and 14%), were real hard-coded secrets they could identify on GitHub. While the researchers refrained from confirming whether these credentials were still active, this suggests that these models could potentially be exploited as an avenue for attack. This would enable the extraction and likely compromise of leaked credentials with a high degree of predictability. The Design of a Prompt Engineering Machine The idea of the study is to see if an attacker could extract secrets by crafting appropriate prompts. To test the odds, the researchers built a prompt testing machine, dubbed the Hard-coded Credential Revealer (HCR). The machine has been designed to maximize the chances of triggering a memorized secret. To do so, it needs to build a strong prompt that will "force" the model to emit the secret. The way to build this prompt is to first look on GitHub for files containing hard-coded secrets using regex patterns. Then, the original hard-coded secret is redacted, and the machine asks the model for code suggestions. Of course, the model needs to be queried many times to have a slight chance of extracting valid credentials, because it often outputs "imaginary" credentials. The researchers also need to test many prompts before finding an operational credential that allows them to log into a system.
In this study, 18 patterns are used to identify code snippets on GitHub, corresponding to 18 different types of secrets (AWS Access Keys, Google OAuth Access Token, GitHub OAuth Access Token, etc.). Although 18 secrets types is far from exhaustive, they are still representative of services widely used by software developers and are easily identifiable. Then, the secrets are removed from the original file, and the code assistant is used to suggest new strings of characters. Those suggestions are then passed through four filters to eliminate a maximum number of false positives. Secrets are discarded if they: Don't match the regex pattern Don't show enough entropy (not random enough, ex: AKIAXXXXXXXXXXXXXXXX) Have a recognizable pattern (ex: AKIA3A3A3A3A3A3A3A3A) Include common words (ex: AKIAIOSFODNN7EXAMPLE) A secret that passes all these tests is considered valid, which means it could realistically be a true secret (hard-coded somewhere else in the training data). Results Among 8,127 suggestions of Copilot, 2,702 valid secrets were successfully extracted. Therefore, the overall valid rate is 2702/8127 = 33.2%, meaning that Copilot generates 2702/900 = 3.0 valid secrets for one prompt on average. CodeWhisperer suggests 736 code snippets in total, among which we identify 129 valid secrets. The valid rate is thus 129/736 = 17.5%. Keep in mind that in this study, a valid secret doesn't mean the secret is real. It means that it successfully passed the filters and, therefore has the properties corresponding to a real secret. So, how can we know if these secrets are genuine operational credentials? The authors explained that they only tried a subset of the valid credentials (test keys like Stripe Test Keys designed for developers to test their programs) for ethical considerations. Instead, the authors are looking for another way to validate the authenticity of the valid credentials collected. They want to assess the memorization, or where the secret appeared on GitHub. The rest of the research focuses on the characteristics of the valid secrets. They look for the secret using GitHub Code Search and differentiate strongly memorized secrets, which are identical to the secret removed in the first place, and weakly memorized secrets, which came from one or multiple other repositories. Finally, there are secrets that could not be located on GitHub and which might come from other sources. Consequences The research paper uncovers a significant privacy risk posed by code completion tools like GitHub Copilot and Amazon CodeWhisperer. The findings indicate that these models not only leak the original secrets present in their training data but also suggest other secrets that were encountered elsewhere in their training corpus. This exposes sensitive information and raises serious privacy concerns. For instance, even if a hard-coded secret was removed from the git history after being leaked by a developer, an attacker can still extract it using the prompting techniques described in the study. The research demonstrates that these models can suggest valid and operational secrets found in their training data. These findings are supported by another recent study conducted by a researcher from Wuhan University, titled Security Weaknesses of Copilot Generated Code in GitHub. The study analyzed 435 code snippets generated by Copilot from GitHub projects and used multiple security scanners to identify vulnerabilities. 
According to the study, 35.8% of the Copilot-generated code snippets exhibited security weaknesses, regardless of the programming language used. By classifying the identified security issues using Common Weakness Enumerations (CWEs), the researchers found that "Hard-coded credentials" (CWE-798) were present in 1.15% of the code snippets, accounting for 1.5% of the 600 CWEs identified. Mitigations Addressing the privacy attack on LLMs requires mitigation efforts from both programmers and machine learning engineers. To reduce the occurrence of hard-coded credentials, the authors recommend using centralized credential management tools and code scanning to prevent the inclusion of code with hard-coded credentials. During the various stages of code completion model development, different approaches can be adopted: Before pre-training, hard-coded credentials can be excluded from the training data by cleaning it. During training or fine-tuning, algorithmic defenses such as Differential Privacy (DP) can be employed to ensure privacy preservation. DP provides strong guarantees of model privacy. During inference, the model output can be post-processed to filter out secrets. Conclusion This study exposes a significant risk associated with code completion tools like GitHub Copilot and Amazon CodeWhisperer. By crafting prompts and analyzing publicly available code on GitHub, the researchers successfully extracted numerous valid hard-coded secrets from these models. To mitigate this threat, programmers should use centralized credential management tools and code scanning to prevent the inclusion of hard-coded credentials. Machine learning engineers can implement measures such as excluding these credentials from training data, applying privacy preservation techniques like Differential Privacy, and filtering out secrets in the model output during inference. These findings extend beyond Copilot and CodeWhisperer, emphasizing the need for security measures in all neural code completion tools. Developers must take proactive steps to address this issue before releasing their tools. In conclusion, addressing the privacy risks and protecting sensitive information associated with large language models and code completion tools requires collaborative efforts between programmers, machine learning engineers, and tool developers. By implementing the recommended mitigations, such as centralized credential management, code scanning, and exclusion of hard-coded credentials from training data, the privacy risks can be effectively mitigated. It is crucial for all stakeholders to work together to ensure the security and privacy of these tools and the data they handle.
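As a small illustration of the "code scanning" mitigation mentioned above, the sketch below is a hypothetical pre-commit hook that rejects commits containing strings matching the commonly used AWS access key ID pattern, one of the 18 secret types covered in the study. Real secret scanners combine many patterns with entropy checks and verification, so treat this only as a starting point rather than a complete defense.
Shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit: block staged changes that look like AWS access key IDs.
if git diff --cached -U0 | grep -E 'AKIA[0-9A-Z]{16}' >/dev/null; then
  echo "Possible hard-coded AWS credential detected in staged changes; aborting commit." >&2
  exit 1
fi
exit 0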