Debugging PostgreSQL performance, the hard way · JustWatch Tech Blog

http://thebuild.com/presentations/pgconfeu-2016-securing-postgresql.pdf

Auto DevOps | GitLab
- Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications.
- Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
- You need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all of the above, based on the environment.
- Continuous deployment to production: Enables Auto Deploy with the master branch directly deployed to production.
- If a project's repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
- Each buildpack requires certain files to be in your project's repository for Auto Build to successfully build your application.
- Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
- Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
- Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
- Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
- License Management uses the License Management Docker image to search the project dependencies for their licenses.
- Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
- Review Apps are temporary application environments based on the branch's code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
- The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
- Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
- Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
- After a branch or merge request is merged into the project's default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID.
- Auto Deploy doesn't include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
- For internal and private projects, a GitLab Deploy Token will be automatically created when Auto DevOps is enabled and the Auto DevOps settings are saved.
- If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
- If present, DB_INITIALIZE will be run as a shell command within an application pod as a Helm post-install hook.
- Because it runs as a post-install hook, once any deploy succeeds, DB_INITIALIZE will not be processed on subsequent deploys.
- Once your application is deployed, Auto Monitoring makes it possible to monitor your application's server and response metrics right out of the box.
- Annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254".
- If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application on port 5000, as this is the port assumed by the default Helm chart.
- While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
- If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
- Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
- Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use, or create two project variables: AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
- Make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
- You can specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
- Your additions will be merged with the Auto DevOps template using the behaviour described for include (see the sketch after these notes).
- Set up the replica variables using a project variable and scale your application by just redeploying it!
- Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables.
- If you update an application secret without changing any code and then manually create a new pipeline, any running application pods will not have the updated secrets.
- The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
- If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for when you're ready to manually deploy to production.
- If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable), then two manual jobs will be created: canary, which will deploy the application to the canary environment, and production_manual, which is to be used when you're ready to manually deploy to production.
- If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, four different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%.
- The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
- Once you get to 100%, you cannot scale down, and you'd have to roll back by redeploying the old version using the rollback button in the environment page.
- When a project has been marked as private, GitLab's Container Registry requires authentication when downloading containers.
- Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
- We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
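
A minimal .gitlab-ci.yml sketch pulling together the variables mentioned in these notes; the values, and the K8S_SECRET_DATABASE_URL name, are hypothetical, and each variable could equally be set as a project-level CI/CD variable instead:

```yaml
# Hedged sketch: include the Auto DevOps template, then opt in to the
# behaviours described above via CI/CD variables.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  STAGING_ENABLED: "1"                 # adds a staging deploy and a production_manual job
  CANARY_ENABLED: "1"                  # adds a manual canary job
  INCREMENTAL_ROLLOUT_MODE: manual     # replaces production with rollout 10%/25%/50%/100% jobs
  REPLICAS: "4"                        # the rollout percentages apply against this pod count
  K8S_SECRET_DATABASE_URL: "postgres://example"  # exposed to the app as DATABASE_URL (hypothetical)
```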

Improving Linux System Performance with I/O Scheduler Tuning | via @codeship

The Rails Command Line - Ruby on Rails Guides
- rails dbconsole figures out which database you're using and drops you into whichever command-line interface you would use with it (see the sketch after these notes).
- The console command lets you interact with your Rails application from the command line. Under the hood, rails console uses IRB.
- rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version.
- You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
- You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
- rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
- Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
- With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
- A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
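
The commands from these notes, as they would be run from the root of a Rails app (the Post scaffold and the WIP annotation are hypothetical examples):

```bash
rails dbconsole      # drops into psql/mysql/sqlite3 for the configured database
rails console        # IRB with the full application environment loaded
rake about           # versions, environment name, database adapter, schema version
rake routes          # list all defined routes
rake assets:precompile && rake assets:clean   # build, then remove, compiled assets
ANNOTATION=WIP rake notes:custom              # list custom "WIP" code annotations
rails generate scaffold Post title:string body:text   # model, migration, controller, views, tests
```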

What's the difference between Prometheus and Zabbix? - Stack Overflow
- Zabbix by default uses a "pull" model, where the server connects to agents on each monitored machine; the agents periodically gather the info and send it to the server.
- Prometheus requires an application to be instrumented with a Prometheus client library (available in different programming languages) for preparing metrics (see the sketch after these notes).
- Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be supported by modern visualization tools like Grafana.
- Prometheus offers a solution for alerting that is separated from its core into the Alertmanager application.
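
Since an instrumented application publishes its metrics over HTTP for Prometheus to pull, a quick way to see what a client library exposes (the host and port here are assumptions) is:

```bash
# an app instrumented with a Prometheus client library serves a /metrics endpoint;
# Prometheus scrapes it on a schedule, but you can inspect it by hand
curl -s http://localhost:8000/metrics | head -n 20
```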

Ansible Tower vs Ansible AWX for Automation - 4sysops
- You can run Ansible freely by downloading the module and running configurations and playbooks from the command line (see the sketch after these notes).
- The AWX Project from Red Hat provides an open-source version of Ansible Tower that may suit the need for Tower functionality in many environments.
- Ansible Tower may be the more familiar option for Ansible users, as it is the commercial tool that provides the officially supported GUI, API access, role-based access, scheduling, notifications, and other nice features that allow businesses to manage environments easily with Ansible.
- Ansible AWX is the open-source project that was the foundation on which Ansible Tower was created. That said, Ansible AWX is a development branch of code that undergoes only minimal quality-engineering testing.
- Ansible AWX is a powerful, freely available open-source project for testing or using AWX in a lab, development, or other POC environment.
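
A minimal sketch of the command-line workflow the first note describes (the inventory and playbook names are hypothetical):

```bash
# ad-hoc module run against every host in a hypothetical inventory
ansible all -i inventory.ini -m ping
# run a playbook from the CLI -- the workflow Tower/AWX wrap with a GUI, RBAC, and scheduling
ansible-playbook -i inventory.ini site.yml
```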

MongoDB Performance Tuning: Everything You Need to Know - Stackify
- globalLock.currentQueue.total: This number can indicate a possible concurrency issue if it's consistently high. This can happen if a lot of requests are waiting for a lock to be released (see the sketch after these notes).
- globalLock.totalTime: If this is higher than the total database uptime, the database has been in a lock state for too long.
- Unlike relational databases such as MySQL or PostgreSQL, MongoDB uses JSON-like documents for storing data.
- When a lock occurs, no other operation can read or modify the data until the operation that initiated the lock is finished.
- Is the database frequently locking from queries? This might indicate issues with the schema design, query structure, or system architecture.
- mem.resident: Roughly equivalent to the amount of RAM in megabytes that the database process uses.
- If mem.resident exceeds the value of system memory and there's a large amount of unmapped data on disk, we've most likely exceeded system capacity.
- If the value of mem.mapped is greater than the amount of system memory, some operations will experience page faults.
- The WiredTiger storage engine is a significant improvement over MMAPv1 in performance and concurrency.
- wiredTiger.cache.bytes currently in the cache – This is the size of the data currently in the cache.
- wiredTiger.cache.tracked dirty bytes in the cache – This is the size of the dirty data in the cache.
- We can look at the wiredTiger.cache.bytes read into cache value for read-heavy applications. If this value is consistently high, increasing the cache size may improve overall read performance.
- Check whether the application is read-heavy. If it is, increase the size of the replica set and distribute the read operations to secondary members of the set.
- Replication lag becomes a particularly thorny problem if the lag between a primary and secondary node is high and the secondary becomes the primary.
- Use the db.printSlaveReplicationInfo() or the rs.printSlaveReplicationInfo() command to see the status of a replica set from the perspective of the secondary member of the set.
- This shows how far behind the secondary members are from the primary. This number should be as low as possible.
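
A sketch of pulling the counters discussed above out of serverStatus() with the legacy mongo shell:

```bash
mongo --quiet --eval 'printjson(db.serverStatus().globalLock)'        # lock queue and total lock time
mongo --quiet --eval 'printjson(db.serverStatus().mem)'               # resident and mapped memory
mongo --quiet --eval 'printjson(db.serverStatus().wiredTiger.cache)'  # cache size and dirty bytes
# replication lag as seen from the secondaries
mongo --quiet --eval 'rs.printSlaveReplicationInfo()'
```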

Running rootless Podman as a non-root user | Enable Sysadmin
- The processes in the container have the default list of namespaced capabilities, which allow the processes to act like root inside of the user namespace.
- The directory is owned by UID 26, but UID 26 is not mapped into the container and is not the same UID that Postgres runs with while in the container.
- Podman launches a container inside of the user namespace, which is mapped with the range of UIDs defined for the user in /etc/subuid and /etc/subgid.
- The easy solution to this problem is to chown the html directory to match the UID that PostgreSQL runs with inside of the container (see the sketch after these notes).
- Use the podman unshare command, which drops you into the same user namespace that rootless Podman uses.
- This setup also means that the processes inside of the container are running as the user's UID. If the container process escaped the container, the process would have full access to files in your home directory based on UID separation.
- If you run the processes within the container as a different non-root UID, however, then those processes will run as that UID. If they escape the container, they would only have world access to content in your home directory.
-
run a podman unshare command, or set up the directories' group ownership as owned by your UID (root inside of the container).
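
A minimal sketch of the ownership fix described above, assuming a bind-mounted ./html directory and a container process that runs as UID 26:

```bash
# chown inside the user namespace: 26 is the in-container UID, which
# podman unshare maps to the matching subordinate UID on the host
podman unshare chown -R 26:26 ./html
# inspect the UID mapping that rootless Podman uses
podman unshare cat /proc/self/uid_map
```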