Contents contributed and discussions participated by 張 旭

How to Set Up Squid Proxy for Private Connections on Ubuntu 20.04 | DigitalOcean

  • it’s a good idea to keep the deny all rule at the bottom of this configuration block.

Using Traefik as a reverse proxy | Blog Eleven Labs

  • a proxy is associated with the client(s), while a reverse proxy is associated with the server(s); a reverse proxy is usually an internal-facing proxy used as a ‘front-end’ to control and protect access to a server on a private network.
  • the restart: always instruction will allow our reverse-proxy service to restart automatically, on its own.
  • add an [api] section to enable the dashboard and the API
  • double the $ symbols in order to escape them, since a single $ is treated as a variable reference.
  • stay consistent with names inside routers and middlewares.
  • providers.file
  • the service name is always in the form of [service name]@[provider]
  • write different routing rules for a service and how to generate SSL certificates
  • traefik.http.services.home.loadbalancer.server.port=8123 indicates that the service port I want to expose is 8123.

The Squeaky Blog | Why we don't use a staging environment

  • Pre-live environments are never at parity with production
  • multiple people use staging to validate their changes before release.
  • Branches are then constantly out of sync with each other, and problems often surface when you merge, rebase, and backfill hotfixes.
  • Big Bang releases
  • there is a lengthy suite of tests and checks that run before it is deployed to staging. During this period, which could end up being hours, engineers will likely pick up another task. I’ve seen people merge, and then forget that their changes are on staging, more times than I can count.
  • only merge code that is ready to go live
  • we have written sufficient tests and have validated our changes in development.
  • All branches are cut from main, and all changes get merged back into main.
  • If we ever have an issue in production, we always roll forward.
  • Feature flags can be enabled on a per-user basis so we can monitor performance and gather feedback
  • Experimental features can be enabled by users in their account settings.
  • we have monitoring, logging, and alarms around all of our services. We also blue/green deploy, by draining and replacing a percentage of containers.
  • Dropping your staging environment in favour of true continuous integration and deployment can create a different mindset for shipping software.

Kubernetes YAML Generator

shared by 張 旭 on 24 Mar 22

Considerations for large clusters | Kubernetes

  • A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
  • Kubernetes v1.23 supports clusters with up to 5000 nodes.
  • criteria:
      - No more than 110 pods per node
      - No more than 5000 nodes
      - No more than 150000 total pods
      - No more than 300000 total containers
  • In-use IP addresses
  • run one or two control plane instances per failure zone, scaling those instances vertically first and then scaling horizontally after reaching the point of diminishing returns to (vertical) scale.
  • Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone
  • store Event objects in a separate dedicated etcd instance.
  • start and configure an additional etcd instance
  • Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact other components.
  • Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
  • When running on large clusters, addons often consume more of some resources than their default limits.
  • Many addons scale horizontally - you add capacity by running more pods
  • The VerticalPodAutoscaler can run in recommender mode to provide suggested figures for requests and limits.
  • Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
  • VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods.
  • The cluster autoscaler integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster.
  • The addon resizer helps you in resizing the addons automatically as your cluster's scale changes.
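
As a concrete illustration of recommender mode, a VerticalPodAutoscaler can target an addon with updates turned off so it only suggests figures; the names below are hypothetical:

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: addon-recommender        # hypothetical
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: cluster-addon          # hypothetical addon Deployment
      updatePolicy:
        updateMode: "Off"            # recommender mode: only suggest requests/limits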

Introducing CNAME Flattening: RFC-Compliant CNAMEs at a Domain's Root

  • you can now safely use a CNAME record, as opposed to an A record that points to a fixed IP address, as your root record in CloudFlare DNS without triggering a number of edge case error conditions because you’re violating the DNS spec.
  • CNAME Flattening allowed us to use a root domain while still maintaining DNS fault-tolerance across multiple IP addresses.
  • Traditionally, the root record of a domain needed to point to an IP address (known as an A -- for "address" -- Record).
  • WordPlumblr allows its users to use custom domains that point to the WordPlumblr infrastructure
  • A CNAME is an alias. It allows one domain to point to another domain which, eventually if you follow the CNAME chain, will resolve to an A record and IP address.
  • For example, WordPlumblr might have assigned the CNAME 6equj5.wordplumblr.com for Foo.com. Foo.com and the other customers may have all initially resolved, at the end of the CNAME chain, to the same IP address.
  • you usually don't want to address memory directly but, instead, you set up a pointer to a block of memory where you're going to store something. If the operating system needs to move the memory around then it just updates the pointer to point to wherever the chunk of memory has been moved to.
  • CNAMEs work great for subdomains like www.foo.com or blog.foo.com. Unfortunately, they don't work for a naked domain like foo.com itself.
  • the DNS spec enshrined that the root record -- the naked domain without any subdomain -- could not be a CNAME.
  • Technically, the root could be a CNAME, but the RFCs state that once a record has a CNAME it can't have any other entries associated with it.
  • a way to support a CNAME at the root, but still follow the RFC and return an IP address for any query for the root record.
  • extended our authoritative DNS infrastructure to, in certain cases, act as a kind of DNS resolver.
  • if there's a CNAME at the root, rather than returning that record directly we recurse through the CNAME chain ourselves until we find an A Record.
  • allows the flexibility of having CNAMEs at the root without breaking the DNS specification.
  • We cache the CNAME responses -- respecting the DNS TTLs, just like a recursor should -- which means often we have the answer without having to traverse the chain.
  • CNAME flattening solved email resolution errors for us, which was key.
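
The behaviour can be illustrated with hypothetical dig queries against the article's example names (the IPs and TTLs are made up): a subdomain still returns its CNAME chain, while the flattened root answers with an A record directly.

    $ dig +noall +answer blog.foo.com
    blog.foo.com.            300  IN  CNAME  6equj5.wordplumblr.com.
    6equj5.wordplumblr.com.  300  IN  A      192.0.2.10

    $ dig +noall +answer foo.com
    foo.com.                 300  IN  A      192.0.2.10  ; CNAME flattened at the root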

Using cache in GitLab CI with Docker-in-Docker | $AYMDEV()

  • optimize our images.
  • When you build an image, it is made of multiple layers: we add a layer per instruction.
  • If we build the same image again without modifying any file, Docker will use existing layers rather than re-executing the instructions.
  • an image is made of multiple layers, and we can accelerate its build by using layers cache from the previous image version.
  • by using Docker-in-Docker, we get a fresh Docker instance per job whose local registry is empty.
  • docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:new-tag"
  • But if you maintain a CHANGELOG in this format, and/or your Git tags are also your Docker tags, you can get the previous version and use the cache of this image version.
  • script:
      - export PREVIOUS_VERSION=$(perl -lne 'print "v${1}" if /^##\s\[(\d\.\d\.\d)\]\s-\s\d{4}(?:-\d{2}){2}\s*$/' CHANGELOG.md | sed -n '2 p')
      - docker build --cache-from "$CI_REGISTRY_IMAGE:$PREVIOUS_VERSION" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" -f ./prod.Dockerfile .
  • “Docker layer caching” is enough to optimize the build time.
  • Cache in CI/CD is about saving directories or files across pipelines.
  • We're building a Docker image; dependencies are installed inside a container. We can't cache a dependencies directory if it doesn't exist in the job workspace.
  • Dependencies will always be installed from a container but will be extracted by the GitLab Runner in the job workspace. Our goal is to send the cached version in the build context.
  • We set the directories to cache in the job settings with a key to share the cache per branch and stage.
  • after_script:
      - docker cp app:/var/www/html/vendor/ ./vendor
      - docker cp app:/var/www/html/node_modules/ ./node_modules
  • To avoid old dependencies being mixed with the new ones, at the risk of keeping unused dependencies in cache, which would make the cache and images heavier.
  • If you need to cache directories in testing jobs, it's easier: use volumes!
  • version your cache keys!
  • sharing a Docker image between jobs
  • In every job, we automatically get artifacts from previous stages.
  • docker save $DOCKER_CI_IMAGE | gzip > app.tar.gz
  • I personally use the “push/pull” technique:
  • we docker push after the build, then we docker pull if needed in the next jobs.
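
A sketch assembling those pieces into a single job; the stage, cache paths, and app container name follow the snippets above, everything else is an assumption:

    build:
      stage: build
      cache:
        key: "$CI_COMMIT_REF_SLUG-build"    # share the cache per branch and stage
        paths:
          - vendor/
          - node_modules/
      script:
        # the fresh DinD daemon has an empty local registry, so seed the layer cache first
        - docker pull "$CI_REGISTRY_IMAGE:latest" || true
        - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:new-tag" .
        - docker create --name app "$CI_REGISTRY_IMAGE:new-tag"
      after_script:
        - docker cp app:/var/www/html/vendor/ ./vendor
        - docker cp app:/var/www/html/node_modules/ ./node_modules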

Docker can now run within Docker - Docker Blog

  • New in Docker 0.6 is the “privileged” mode for containers. It allows you to run some containers with (almost) all the capabilities of their host machine, regarding kernel features and device access.
  • Among the (many!) possibilities of the “privileged” mode, you can now run Docker within Docker itself.
  • in the new privileged mode.
  • /var/lib/docker should be a volume. This is important, because the filesystem of a container is an AUFS mountpoint, composed of multiple branches; and those branches have to be “normal” filesystems (i.e. not AUFS mountpoints).
  • /var/lib/docker, the place where Docker stores its containers, cannot be an AUFS filesystem.
  • we use them as a pass-through to the “normal” filesystem of the host machine.
  • The /var/lib/docker directory of the nested Docker will live somewhere in /var/lib/docker/volumes on the host system.
  • since the private Docker instances run in privileged mode, they can easily escalate to the host, and you probably don’t want this! If you really want to run something like this and expose it to the public, you will have to fine-tune the LXC template file, to restrict the capabilities and devices available to the Docker instances.
  • When you are inside a privileged container, you can always nest one more level
  • the LXC tools cannot start nested containers if the devices control group is not in its own hierarchy.
  • if you use AppArmor, you need a special policy to support nested containers.
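
A hedged sketch of starting such a nested daemon (the image name references the classic dind example from this era; the exact invocation may differ):

    # run a privileged container whose /var/lib/docker is a volume,
    # keeping the nested daemon's storage off the outer AUFS mount
    docker run --privileged -t -i -v /var/lib/docker jpetazzo/dind
    # inside, the nested Docker can run containers -- and nest one more level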

Docker image building on GitLab CI | $AYMDEV()

  • Continuous Integration (or CI) is a practice where you continuously test an application to detect errors as soon as possible.
  • Docker is a container technology; many CI tools execute jobs (the tasks of a pipeline) in containers to have an isolated environment.
  • Docker in Docker (“DinD” for short) means executing Docker in a Docker container.
  • images are saved in the host registry, we can benefit from Docker layer caching
  • All jobs will share the same environment, if many of them run simultaneously they might get into conflicts.
  • storage management (accumulating images)
  • The Docker socket binding technique means making a volume of /var/run/docker.sock between host and containers.
  • all containers would share the same Docker daemon.
  • Add privileged = true in the [runners.docker] section; the privileged mode is mandatory to use DinD.
  • To allow the runner to run more than one job at a time, change the concurrent value on the first line.
  • To avoid building a Docker image at each job, it can be built in a first job, pushed to the image registry provided by GitLab, and pulled in the next jobs.
  • functional tests depending on a database.
  • Docker Compose allows you to easily start multiple containers, but it has no more features than Docker itself
  • Docker in Docker works well, but has its drawbacks, like Docker layer caching which needs some more commands to be used.
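
A sketch of the two configuration pieces mentioned above: the runner's config.toml, and a job that builds with the DinD service and pushes the image for later jobs (versions and tags are assumptions):

    # config.toml
    concurrent = 4                # let the runner take several jobs at once
    [[runners]]
      [runners.docker]
        privileged = true         # mandatory for Docker in Docker

    # .gitlab-ci.yml
    build:
      image: docker:20.10
      services:
        - docker:20.10-dind
      script:
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"    # pulled by later jobs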

Optimizing Gitlab pipelines - Basics (1) | PrinsFrank.nl

  • When you use a specific Docker image, make sure you have the Dependency Proxy enabled so the image doesn’t have to be downloaded again for every job.
  • stages are used to group items that can run at the same time.
  • Instead of waiting for all jobs to finish, you can mark jobs as interruptible which signals a job to cancel when a new pipeline starts for the same branch
  • mark all jobs as interruptible as it doesn’t make sense to wait for builds and tests based on old information.
  • Deployment jobs are the main exception as they should probably finish.
  • only running it when specific files have changed
  • To prevent the ‘vendor’ and ‘node_modules’ folders from being regenerated in every job, we can configure a build job for composer and npm assets.
  • To share assets between multiple stages, Gitlab has caches and artifacts. For dependencies we should use caches.
  • The pull-push policy is the default, but specified here for clarity.
  • All consecutive runs for the build step with the same ‘composer.lock’ file don’t update the cache.
  • Composer prevents this by caching packages in a global package cache.
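
Put together, a dependency build job following these notes might look like this sketch (the job name and paths are illustrative):

    composer:
      stage: build
      interruptible: true         # cancel when a new pipeline starts on the branch
      cache:
        key:
          files:
            - composer.lock       # same lock file => cache is reused, not updated
        paths:
          - vendor/
        policy: pull-push         # the default, specified here for clarity
      script:
        - composer install --no-progress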

3. Hello, world! - Err 9.9.9 documentation

  • The first is a Message object, which represents the full message object received by Errbot.
  • The second is a string (or a list, if using the split_args_with parameter of botcmd()) with the arguments passed to the command.
  • args would be the string “Mister Errbot”.
  • If you return None, Errbot will not respond with any kind of message when executing the command.
  • A file that ends with the extension .plug is used by Errbot to identify and load plugins.
  • The key Module should point to a module that Python can find and import.
  • The key Name should be identical to the name you gave to the class in your plugin file
  • The presence of __init__.py indicates lib is a regular Python package.
  • the !status command.
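
A minimal sketch of the plugin/.plug pair these notes describe; the class, module, and command names are illustrative:

    # hello.py
    from errbot import BotPlugin, botcmd

    class HelloWorld(BotPlugin):
        @botcmd
        def hello(self, msg, args):
            # msg is the Message object; args is the argument string
            if not args:
                return None       # returning None makes Errbot send no response
            return "Hello, {}!".format(args)

    # hello.plug (INI format)
    [Core]
    ; Name must match the class name; Module must be importable by Python
    Name = HelloWorld
    Module = hello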

What ChatOps Solutions Should You Use Today? | PäksTech

shared by 張 旭 on 16 Feb 22
  • The big elephant in the room is of course Hubot, which now hasn’t seen new commits in over three years.
  • Botkit bots are written in JavaScript and they run on Node.js
  • Errbot is a chatbot written in Python; it comes with a ton of features, and it is extendable with custom plugins.
  • by default they react to !commands in your chatroom. Commands can also trigger on regular expression matches, with or without a bot prefix.
  • Errbot also supports Markdown responses with Jinja2 templating.
  • Errbot supports webhooks; it has a small web server that can translate endpoints to your custom plugins.
  • It’s recommended that you configure this behind a web server such as nginx or Apache.
  • It works with the If This Then That (IFTTT) principle, meaning that you define a set of rules that the system then uses to take action.
  • Lita is a chat bot written in Ruby. Like the other bots I’ve mentioned, it is also open source and supports different chat platforms via plugins.
  • Gort is a newer entrant to the ChatOps space. As the name suggests it has been written in Go, and it is still under active development.
  • can persist information in databases, supports advanced parsers, and is extendable with custom skills.

Language Server Protocol - Wikipedia

  • Modern IDEs provide developers with sophisticated features like code completion, refactoring, navigating to a symbol's definition, syntax highlighting, and error and warning markers.
  • an IDE needs a sophisticated understanding of the programming language that the program's source is written in.
  • Conventional compilers or interpreters for a specific programming language are typically unable to provide these language services, because they are written with the goal of either transforming the source code into object code or immediately executing the code.
  • Prior to the design and implementation of the Language Server Protocol for the development of Visual Studio Code, most language services were generally tied to a given IDE or other editor.
  • The Language Server Protocol allows for decoupling language services from the editor so that the services may be contained within a general purpose language server.
  • LSP is not restricted to programming languages. It can be used for any kind of text-based language, like specifications[7] or domain-specific languages (DSL).
  • When a user edits one or more source code files using a language server protocol-enabled tool, the tool acts as a client that consumes the language services provided by a language server.
  • The protocol does not make any provisions about how requests, responses and notifications are transferred between client and server.
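
Concretely, the messages are JSON-RPC; a simplified completion request from the editor (client) to the language server might look like the following, where the URI and position are made up:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "textDocument/completion",
      "params": {
        "textDocument": { "uri": "file:///project/src/app.ts" },
        "position": { "line": 3, "character": 10 }
      }
    }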

APP_KEY And You | Tighten

  • The application key is a random, 32-character string stored in the APP_KEY key in your .env file.
  • Once your app is running, there's one place it uses the APP_KEY: cookies.
  • Laravel uses the key for all encrypted cookies, including the session cookie, before handing them off to the user's browser, and it uses it to decrypt cookies read from the browser.
  • Encrypted cookies are an important security feature in Laravel.
  • All of this encryption and decryption is handled in Laravel by the Encrypter using PHP's built-in security tools, including OpenSSL.
  • Passwords are not encrypted, they are hashed.
  • Laravel's passwords are hashed using Hash::make() or bcrypt(), neither of which use APP_KEY.
  • Crypt (symmetric encryption) and Hash (one-way cryptographic hashing).
  • Laravel uses this same method for cookies, both the sender and receiver, using APP_KEY as the encryption key.
  • For something like user passwords, you should never have a way to decrypt them. Ever.
  • Unique: The collision rate (different inputs hashing to the same output) should be very small
  • Laravel hashing implements the native PHP password_hash() function, defaulting to a hashing algorithm called bcrypt.
  • With a one-way hash, we cannot decrypt it. All that we can do is test against it.
  • When the user with this password attempts to log in, Laravel hashes their password input and uses PHP’s password_verify() function to compare the new hash with the database hash
  • User password storage should never be reversible, and therefore doesn’t need APP_KEY at all.
  • Any good credential management strategy should include rotation: changing keys and passwords on a regular basis
  • update the key on each server.
  • users will have their sessions invalidated as soon as you change your APP_KEY.
  • make and test a plan to decrypt that data with your old key and re-encrypt it with the new key.
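
A short PHP sketch of the distinction drawn above, using Laravel's Crypt and Hash facades (the values are illustrative):

    use Illuminate\Support\Facades\Crypt;
    use Illuminate\Support\Facades\Hash;

    // Symmetric encryption: uses APP_KEY and is reversible.
    $encrypted = Crypt::encryptString('some secret value');
    $plain     = Crypt::decryptString($encrypted);

    // One-way hashing: does not use APP_KEY and cannot be reversed;
    // all we can do is test input against the stored hash.
    $hashed = Hash::make('user password');
    $ok     = Hash::check('user password', $hashed);    // true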

Tagging AWS resources - AWS General Reference

  • assign metadata to your AWS resources in the form of tags.
  • a user-defined key and value
  • Tag keys are case sensitive.
  • tag values are case sensitive.
  • Tags are accessible to many AWS services, including billing.
  • personally identifiable information (PII)
  • apply it consistently across all resource types.
  • Use automated tools to help manage resource tags.
  • Use too many tags rather than too few tags.
  • Tag policies let you specify tagging rules that define valid key names and the values that are valid for each key.
  • Name – Identify individual resources
  • Environment – Distinguish between development, test, and production resources
  • Project – Identify projects that the resource supports
  • Owner – Identify who is responsible for the resource
  • Each resource can have a maximum of 50 user-created tags.
  • For each resource, each tag key must be unique, and each tag key can have only one value.
  • Tag keys and values are case sensitive.
  • decide on a strategy for capitalizing tags, and consistently implement that strategy across all resource types.
  • AWS Cost Explorer and detailed billing reports let you break down AWS costs by tag.
  • An effective tagging strategy uses standardized tags and applies them consistently and programmatically across AWS resources.

Run your CI/CD jobs in Docker containers | GitLab

  • If you run Docker on your local machine, you can run tests in the container, rather than testing on a dedicated CI/CD server.
  • Run other services, like MySQL, in containers. Do this by specifying services in your .gitlab-ci.yml file.
  • By default, the executor pulls images from Docker Hub
  • Maps must contain at least the name option, which is the same image name as used for the string setting.
  • When a CI job runs in a Docker container, the before_script, script, and after_script commands run in the /builds/<project-path>/ directory. Your image may have a different default WORKDIR defined. To move to your WORKDIR, save the WORKDIR as an environment variable so you can reference it in the container during the job’s runtime.
  • The runner starts a Docker container using the defined entrypoint. The default comes from the Dockerfile and may be overridden in the .gitlab-ci.yml file.
  • attaches itself to a running container.
  • sends the script to the container’s shell stdin and receives the output.
  • To override the entrypoint of a Docker image, define an empty entrypoint in the .gitlab-ci.yml file, so the runner does not start a useless shell layer. However, that does not work for all Docker versions. For Docker 17.06 and later, the entrypoint can be set to an empty value. For Docker 17.03 and earlier, the entrypoint can be set to /bin/sh -c, /bin/bash -c, or an equivalent shell available in the image.
  • The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command.
  • entrypoint: [""]
  • entrypoint: ["/bin/sh", "-c"]
  • A DOCKER_AUTH_CONFIG CI/CD variable