Larvata / Group items tagged "work"

張 旭

Overriding Auto Devops - 0 views

  • most customers need to modify the devops pipeline to suit their needs
  • include Auto Devops and override it.
  • include all of Auto Devops, just as if the Auto Devops checkbox were checked for the project
  • skips for all the scans, as a way of speeding up the build process while working on the CI configuration
  • The Auto Devops test job, which uses Herokuish for testing, does not rely on the Docker image that’s generated during the Build job
  • moving the Test job to the Build stage to speed things along
  • Literally any part of Auto Devops can be overridden in your own CI configuration.
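  A hedged .gitlab-ci.yml sketch of the override pattern highlighted above (the template name and the *_DISABLED variables follow GitLab's Auto DevOps documentation; verify them against your GitLab version):

      # Include all of Auto DevOps, as if the project checkbox were checked.
      include:
        - template: Auto-DevOps.gitlab-ci.yml

      # Override the generated test job: move it to the build stage,
      # since its Herokuish run does not need the built Docker image.
      test:
        stage: build

      # Skip the scan jobs while iterating on the CI configuration.
      variables:
        SAST_DISABLED: "true"
        DEPENDENCY_SCANNING_DISABLED: "true"
        CONTAINER_SCANNING_DISABLED: "true"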
張 旭

Introducing the MinIO Operator and Operator Console - 0 views

  • Object-storage-as-a-service is a game changer for IT.
  • provision multi-tenant object storage as a service.
  • have the skill set to create, deploy, tune, scale and manage modern, application oriented object storage using Kubernetes
  • MinIO is purpose-built to take full advantage of the Kubernetes architecture.
  • MinIO and Kubernetes work together to simplify infrastructure management, providing a way to manage object storage infrastructure within the Kubernetes toolset.  
  • The operator pattern extends Kubernetes' familiar declarative API model with custom resource definitions (CRDs) to perform common operations such as resource orchestration, non-disruptive upgrades, and cluster expansion, and to maintain high availability
  • The Operator builds on the kubectl command set that the Kubernetes community is already familiar with and adds the kubectl minio plugin. The MinIO Operator and the MinIO kubectl plugin facilitate the deployment and management of MinIO object storage on Kubernetes, which is how multi-tenant object storage as a service is delivered.
  • choosing a leader for a distributed application without an internal member election process
  • The Operator Console makes Kubernetes object storage easier still. In this graphical user interface, MinIO created something so simple that anyone in the organization can create, deploy and manage object storage as a service.
  • The primary unit of managing MinIO on Kubernetes is the tenant.
  • The MinIO Operator can allocate multiple tenants within the same Kubernetes cluster.
  • Each tenant, in turn, can have a different capacity (e.g., a small 500GB tenant vs. a 100TB tenant), different resources (1000m CPU and 4Gi RAM vs. 4000m CPU and 16Gi RAM), and a different number of servers (4 pods vs. 16 pods), as well as separate configurations for Identity Providers, Encryption, and versions.
  • each tenant is a cluster of server pools (independent sets of nodes with their own compute, network, and storage resources) that, while sharing the same physical infrastructure, are fully isolated from each other in their own namespaces.
  • Each tenant runs their own MinIO cluster, fully isolated from other tenants
  • Each tenant scales independently by federating clusters across geographies.
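  A hedged sketch of tenant provisioning with the kubectl minio plugin (the flag names follow the MinIO Operator docs of this period and may differ between operator versions):

      # Install the MinIO Operator into the current cluster.
      kubectl minio init

      # Create an isolated tenant in its own namespace:
      # 4 server pods with 4 volumes each and 1Ti of raw capacity.
      kubectl create namespace tenant-small
      kubectl minio tenant create tenant-small \
        --servers 4 \
        --volumes 16 \
        --capacity 1Ti \
        --namespace tenant-small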
張 旭

Helm | Getting Started - 0 views

  • The templates/ directory is for template files. When Helm evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. It then collects the results of those templates and sends them on to Kubernetes.
  • The charts/ directory may contain other charts (which we call subcharts).
  • we recommend using the suffix .yaml for YAML files and .tpl for helpers.
  • The helm get manifest command takes a release name (full-coral) and prints out all of the Kubernetes resources that were uploaded to the server.
  • Each file begins with --- to indicate the start of a YAML document, and then is followed by an automatically generated comment line that tells us what template file generated this YAML document.
  • name: field is limited to 63 characters because of limitations to the DNS system.
  • The template directive {{ .Release.Name }} injects the release name into the template. The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
  • The leading dot before Release indicates that we start with the top-most namespace for this scope
  • helm install --debug --dry-run goodly-guppy ./mychart. This will render the templates. But instead of installing the chart, it will return the rendered template to you
  • Using --dry-run will make it easier to test your code, but it won't ensure that Kubernetes itself will accept the templates you generate.
  • It's best not to assume that your chart will install just because --dry-run works.
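  A minimal templates/configmap.yaml in the spirit of the getting-started guide, showing the {{ .Release.Name }} directive; rendering it with helm install --debug --dry-run shows the result without touching Kubernetes:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        # Rendered from the release name, e.g. "full-coral-configmap".
        name: {{ .Release.Name }}-configmap
      data:
        myvalue: "Hello World"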
張 旭

Helm | Variables - 0 views

shared by 張 旭 on 03 Oct 21
  • there is one variable that is always global - $ - this variable will always point to the root context.
  • # Many helm templates would use `.` below, but that will not work
  • {{- range
張 旭

Run your CI/CD jobs in Docker containers | GitLab - 0 views

  • If you run Docker on your local machine, you can run tests in the container, rather than testing on a dedicated CI/CD server.
  • Run other services, like MySQL, in containers. Do this by specifying services in your .gitlab-ci.yml file.
  • By default, the executor pulls images from Docker Hub
  • Maps must contain at least the name option, which is the same image name as used for the string setting.
  • When a CI job runs in a Docker container, the before_script, script, and after_script commands run in the /builds/<project-path>/ directory. Your image may have a different default WORKDIR defined. To move to your WORKDIR, save the WORKDIR as an environment variable so you can reference it in the container during the job’s runtime.
  • The runner starts a Docker container using the defined entrypoint. The default comes from the Dockerfile and may be overridden in the .gitlab-ci.yml file.
  • attaches itself to a running container.
  • sends the script to the container’s shell stdin and receives the output.
  • To override the entrypoint of a Docker image, define an empty entrypoint in the .gitlab-ci.yml file, so the runner does not start a useless shell layer. However, that does not work for all Docker versions. For Docker 17.06 and later, the entrypoint can be set to an empty value. For Docker 17.03 and earlier, the entrypoint can be set to /bin/sh -c, /bin/bash -c, or an equivalent shell available in the image.
  • The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command.
  • entrypoint: [""]
  • entrypoint: ["/bin/sh", "-c"]
  • A DOCKER_AUTH_CONFIG CI/CD variable
張 旭

phusion/passenger-docker: Docker base images for Ruby, Python, Node.js and Meteor web apps - 0 views

  • Ubuntu 20.04 LTS as base system
  • Ruby 2.7.5 is configured as the default.
  • Python 3.8
  • A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
  • Nginx 1.18. Disabled by default
  • production-grade features, such as process monitoring, administration and status inspection.
  • Redis 5.0. Not installed by default.
  • The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
  • running applications without root privileges is good security practice.
  • Your application should be placed inside /home/app.
  • COPY --chown=app:app
  • Passenger works like a mod_ruby, mod_nodejs, etc. It changes Nginx into an application server and runs your app from Nginx.
  • placing a .conf file in the directory /etc/nginx/sites-enabled
  • The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
  • files in conf.d are included in the Nginx configuration's http context.
  • any environment variables you set with docker run -e, Docker linking, or /etc/container_environment won't reach Nginx.
  • To preserve these variables, place an Nginx config file ending with *.conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve these variables.
  • By default, Phusion Passenger sets all of the following environment variables to the value production
  • Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of these environment variables.
  • PASSENGER_APP_ENV environment variable
  • passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
  • The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
  • You can add additional daemons to the image by creating runit entries.
  • The shell script must be called run, must be executable
  • the shell script must run the daemon without letting it daemonize/fork.
  • We use RVM to install and to manage Ruby interpreters.
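  A hedged Dockerfile sketch combining the pieces above (the image tag and copied file names are placeholders; the runit "down" file and /sbin/my_init follow the project's README):

      FROM phusion/passenger-ruby27:latest

      # Nginx and Passenger are disabled by default; removing the runit
      # "down" file enables the nginx service.
      RUN rm -f /etc/service/nginx/down

      # Preserve `docker run -e` variables for Nginx, which would
      # otherwise strip them.
      COPY env.conf /etc/nginx/main.d/env.conf

      # Serve the app through an Nginx vhost.
      RUN rm /etc/nginx/sites-enabled/default
      COPY webapp.conf /etc/nginx/sites-enabled/webapp.conf

      # The app lives in /home/app and runs as the unprivileged app user.
      COPY --chown=app:app . /home/app/webapp

      # Use the base image's init system.
      CMD ["/sbin/my_init"]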
張 旭

Docker image building on GitLab CI | $AYMDEV() - 0 views

  • Continuous Integration (or CI) is a practice where you continuously test an application to detect errors as soon as possible.
  • Since Docker is a container technology, many CI tools execute jobs (the tasks of a pipeline) in containers to get an isolated environment.
  • Docker in Docker (« DinD » in short) means executing Docker in a Docker container.
  • images are saved in the host registry, we can benefit from Docker layer caching
  • All jobs share the same environment; if many of them run simultaneously, they might get into conflicts.
  • storage management (accumulating images)
  • The Docker socket binding technique means making a volume of /var/run/docker.sock between host and containers.
  • all containers would share the same Docker daemon.
  • Add privileged = true in the [runners.docker] section, the privileged mode is mandatory to use DinD.
  • To keep the runner from being limited to one job at a time, raise the concurrent value on the first line of its configuration.
  • To avoid building a Docker image at each job, it can be built in a first job, pushed to the image registry provided by GitLab, and pulled in the next jobs.
  • functional tests depending on a database.
  • Docker Compose allows you to easily start multiple containers, but it has no more features than Docker itself
  • Docker in Docker works well, but has its drawbacks, like Docker layer caching which needs some more commands to be used.
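  A hedged sketch of the build-once, pull-later pattern described above, using GitLab's predefined registry variables (the docker image tags are illustrative):

      build:
        image: docker:24
        services:
          - docker:24-dind   # requires privileged = true on the runner
        script:
          - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
          # Reuse previously pushed layers as a cache, then push a fresh tag.
          - docker pull "$CI_REGISTRY_IMAGE:latest" || true
          - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
          - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

      test:
        # Later jobs pull the image built above instead of rebuilding it.
        image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
        script:
          - ./run-tests.sh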
張 旭

Why I Will Never Use Alpine Linux Ever Again | Martin Heinz | Personal Website & Blog - 2 views

  • musl is an implementation of the C standard library. It is more lightweight, faster, and simpler than glibc, which is used by other Linux distros such as Ubuntu.
  • Some of it stems from how musl (and therefore also Alpine) handles DNS (it's always DNS), more specifically, musl (by design) doesn't support DNS-over-TCP.
  • By using Alpine, you're getting "free" chaos engineering for your cluster.
  • this DNS issue does not manifest in a Docker container. It can only happen in Kubernetes, so if you test locally, everything will work fine, and you will only find out about the unfixable issue when you deploy the application to a cluster.
  • if your application requires CGO_ENABLED=1, you will obviously run into issues with Alpine.
張 旭

Installation Guide - NGINX Ingress Controller - 0 views

  • On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration.
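  For reference, a hedged sketch of the guide's Helm quick start (pin the chart version in real use):

      helm upgrade --install ingress-nginx ingress-nginx \
        --repo https://kubernetes.github.io/ingress-nginx \
        --namespace ingress-nginx --create-namespace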
張 旭

Providers - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs.
  • Terraform configurations must declare which providers they require so that Terraform can install and use them.
  • Each provider adds a set of resource types and/or data sources that Terraform can manage.
  • Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure.
  • The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms.
  • Dependency Lock File documents an additional HCL file that can be included with a configuration, which tells Terraform to always use a specific set of provider versions.
  • Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
  • To save time and bandwidth, Terraform CLI supports an optional plugin cache. You can enable the cache using the plugin_cache_dir setting in the CLI configuration file.
  • you can use Terraform CLI to create a dependency lock file and commit it to version control along with your configuration.
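  A minimal sketch of a provider requirement (the aws provider and version constraint are illustrative); terraform init installs the provider and records the selected version in .terraform.lock.hcl, which can be committed to version control:

      terraform {
        required_providers {
          aws = {
            source  = "hashicorp/aws"
            version = "~> 5.0"   # constraint; exact version is pinned in the lock file
          }
        }
      }

      provider "aws" {
        region = "us-east-1"
      }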
張 旭

Installing kubeadm | Kubernetes - 0 views

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
  • The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
  • some virtual machines may have identical values.
  • Kubernetes uses these values to uniquely identify the nodes in the cluster.
  • Make sure that the br_netfilter module is loaded.
  • you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config,
  • kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.
  • one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version.
  • Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
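  A hedged shell sketch of the br_netfilter and sysctl steps above, along the lines of the kubeadm installation guide:

      # Load br_netfilter now and on every boot.
      cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
      br_netfilter
      EOF
      sudo modprobe br_netfilter

      # Let iptables see bridged traffic.
      cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sudo sysctl --system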
張 旭

Auto DevOps | GitLab - 0 views

  • Scan for vulnerabilities and security flaws.
  • Auto DevOps starts by building and testing your application.
  • preview your changes on a per-branch basis.
  • you don’t need to set up the deployment upfront. Auto DevOps still builds and tests your application. You can define the deployment later.
  • ship your app first, then explore the customizations later.
  • Consistency
  • Auto DevOps works with any Kubernetes cluster.
  • To use Auto DevOps for individual projects, you can enable it on a project-by-project basis.
  • Only project Maintainers can enable or disable Auto DevOps at the project level.
  • We strongly advise you to use GitLab Container Registry with Auto DevOps to simplify configuration and prevent any unforeseen issues.
  • The GitLab integration with Helm does not support installing applications when behind a proxy.
    • 張 旭
       
      Deprecated; do not use it.
張 旭

Moving away from Alpine - DEV Community - 0 views

  • it’s a lot of work to get packages that are not readily available in the Alpine repository.
  • things compiled in Alpine won’t be usable on Ubuntu, for example, and vice versa.
  • the difficulty in pinning package versions in Alpine.
  • Developers rely heavily on app logs via syslog (mounted /dev/log) and Alpine uses busybox syslog by default.
  • Ubuntu officially launched minimal ubuntu images for cloud / container use
張 旭

What ChatOps Solutions Should You Use Today? | PäksTech - 0 views

shared by 張 旭 on 16 Feb 22
  • The big elephant in the room is of course Hubot, which now hasn’t seen new commits in over three years.
  • Botkit bots are written in JavaScript and they run on Node.js
  • Errbot is a chatbot written in Python, it comes with a ton of features, and it is extendable with custom plugins.
  • by default they react to !commands in your chatroom. Commands can also trigger on regular expression matches, with or without a bot prefix.
  • Errbot also supports Markdown responses with Jinja2 templating.
  • Errbot supports webhooks; it has a small web server that can translate endpoints to your custom plugins.
  • It’s recommended that you configure this behind a web server such as nginx or Apache.
  • It works with the If This Then That (IFTTT) principle, meaning that you define a set of rules that the system then uses to take action.
  • Lita is a chat bot written in Ruby. Like the other bots I’ve mentioned, it is also open source and supports different chat platforms via plugins.
  • Gort is a newer entrant to the ChatOps space. As the name suggests it has been written in Go, and it is still under active development.
  • can persist information in databases, supports advanced parsers, and is extendable with custom skills.
張 旭

architecture - Difference between a "coroutine" and a "thread"? - Stack Overflow - 0 views

  • Co stands for cooperation. A coroutine is asked to (or better, expected to) willingly suspend its execution to give other coroutines a chance to execute too. So a coroutine is about sharing CPU resources (willingly) so others can use the same resource as oneself is using.
  • A thread on the other hand does not need to suspend its execution. Being suspended is completely transparent to the thread and the thread is forced by underlying hardware to suspend itself.
  • co-routines cannot be concurrently executed, and race conditions cannot occur.
  • Concurrency is the separation of tasks to provide interleaved execution.
  • Parallelism is the simultaneous execution of multiple pieces of work in order to increase speed.
  • With threads, the operating system switches running threads preemptively according to its scheduler, which is an algorithm in the operating system kernel.
  • With coroutines, the programmer and programming language determine when to switch coroutines
  • In contrast to threads, which are pre-emptively scheduled by the operating system, coroutine switches are cooperative, meaning the programmer (and possibly the programming language and its runtime) controls when a switch will happen.
  • preemption
  • Coroutines are a form of sequential processing: only one is executing at any given time
  • Threads are (at least conceptually) a form of concurrent processing: multiple threads may be executing at any given time.
張 旭

Java microservices architecture by example - 0 views

  • A microservices architecture is a particular case of a service-oriented architecture (SOA)
  • What sets microservices apart is the extent to which these modules are interconnected.
  • Every server handles just one specific business process and never consists of several smaller servers.
  • Microservices also bring a set of additional benefits, such as easier scaling, the possibility to use multiple programming languages and technologies, and others.
  • Java is a frequent choice for building a microservices architecture as it is a mature language tested over decades and has a multitude of microservices-favorable frameworks, such as legendary Spring, Jersey, Play, and others.
  • A monolithic architecture keeps it all simple. An app has just one server and one database.
  • All the connections between units are inside-code calls.
  • split our application into microservices and got a set of units completely independent for deployment and maintenance.
  • Each of microservices responsible for a certain business function communicates either via sync HTTP/REST or async AMQP protocols.
  • ensure seamless communication between newly created distributed components.
  • The gateway became an entry point for all clients’ requests.
  • We also set up the Zuul 2 framework for our gateway service so that the application could leverage the benefits of non-blocking HTTP calls.
  • we've implemented the Eureka server as our service discovery; it keeps a list of the user-profile and order servers in use to help them discover each other.
  • We also have a message broker (RabbitMQ) as an intermediary between the notification server and the rest of the servers to allow async messaging in-between.
  • microservices can definitely help when it comes to creating complex applications that deal with huge loads and need continuous improvement and scaling.