Larvata / Group items tagged: machine

張 旭

How To Install and Use Docker: Getting Started | DigitalOcean - 0 views

  • docker as a project offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines - virtual or physical - and brings along loads more of great benefits with it
  • docker daemon: used to manage docker (LXC) containers on the host it runs
  • docker CLI: used to command and communicate with the docker daemon
  • containers: directories containing everything-your-application
  • images: snapshots of containers or base OS (e.g. Ubuntu) images
  • Dockerfiles: scripts automating the building process of images
  • Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts).
  • Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and the resources they use) contained within their own environment
  • Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one.
  • Each docker container starts from a docker image which forms the base for other applications and layers to come.
  • Docker images constitute the base of docker containers from which everything starts to form
  • a solid, consistent and dependable base with everything that is needed to run the applications
  • As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes.
  • a Dockerfile for automated image building
  • Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image.
  • As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”.
  • Please remember to “commit” all your changes.
  • When you "run" any process using an image, in return, you will have a container.
  • When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via the docker rm command.
  • To create a new container, you need to use a base image and specify a command to run.
  • you cannot change the command you run after having created a container (hence specifying one during "creation")
  • If you would like to save the progress and changes you made with a container, you can use “commit”
  • turns your container to an image
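
A minimal sketch of the run-and-commit cycle these notes describe; the repository and image names are only examples, not from the article:

    # running a process from a base image creates a container
    docker run -it ubuntu /bin/bash
    # ...install software, edit config files, then exit...

    # stopped containers stay on the system until removed
    docker ps -a

    # commit the container's changes to form a new image (example name)
    docker commit <container_id> example_user/example_image

    # remove a container you no longer need
    docker rm <container_id>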
張 旭

Orbs, Jobs, Steps, and Workflows - CircleCI - 0 views

  • Orbs are packages of config that you either import by name or configure inline to simplify your config, share, and reuse config within and across projects.
  • Jobs are a collection of Steps.
  • All of the steps in the job are executed in a single unit which consumes a CircleCI container from your plan while it’s running.
  • Workspaces persist data between jobs in a single Workflow.
  • Caching persists data between the same job in different Workflow builds.
  • Artifacts persist data after a Workflow has finished.
  • jobs can be run using the machine executor, which enables reuse of recently used machine executor runs
  • docker executor which can compose Docker containers to run your tests and any services they require
  • macos executor
  • Steps are a collection of executable commands which are run during a job
  • In addition to the run: key, keys for save_cache:, restore_cache:, deploy:, store_artifacts:, store_test_results: and add_ssh_keys are nested under Steps.
  • the checkout: key is required to check out your code
  • run: enables addition of arbitrary, multi-line shell command scripting
  • orchestrating job runs with parallel, sequential, and manual approval workflows.
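
A minimal config sketch that ties jobs, steps, and a workflow together; the image and job names are illustrative, not taken from the article:

    version: 2.1
    jobs:
      build:
        docker:
          - image: cimg/base:stable
        steps:
          - checkout
          - run: echo "arbitrary multi-line shell commands go here"
    workflows:
      main:
        jobs:
          - build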
張 旭

Build an Image - Getting Started - Packer by HashiCorp - 0 views

  • The configuration file used to define what image we want built and how is called a template in Packer terminology.
  • JSON struck the best balance between human-editable and machine-editable, allowing both hand-made templates as well as machine generated templates to easily be made.
  • keeping your secret keys out of the template
  • validate the template by running packer validate example.json. This command checks the syntax as well as the configuration values to verify they look valid.
  • At the end of running packer build, Packer outputs the artifacts that were created as part of the build.
  • Packer only builds images. It does not attempt to manage them in any way.
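
A sketch of such a template and the commands around it, loosely following the getting-started guide; the builder values (region, source AMI, credentials) are placeholders you would replace:

    {
      "variables": {
        "aws_access_key": "",
        "aws_secret_key": ""
      },
      "builders": [{
        "type": "amazon-ebs",
        "access_key": "{{user `aws_access_key`}}",
        "secret_key": "{{user `aws_secret_key`}}",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "packer-example {{timestamp}}"
      }]
    }

    packer validate example.json
    packer build -var 'aws_access_key=YOUR_KEY' -var 'aws_secret_key=YOUR_SECRET' example.json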
張 旭

Glossary - CircleCI - 0 views

  • User authentication may use LDAP for an instance of the CircleCI application that is installed on your private server or cloud
  • The first user to log into a private installation of CircleCI
  • Contexts provide a mechanism for securing and sharing environment variables across projects.
  • The environment variables are defined as name/value pairs and are injected at runtime.
  • The CircleCI Docker Layer Caching feature allows builds to reuse Docker image layers
  • from previous builds.
  • Image layers are stored in separate volumes in the cloud and are not shared between projects.
  • Layers may only be used by builds from the same project.
  • Environment variables store customer data that is used by a project.
  • Defines the underlying technology to run a job.
  • machine to run your job inside a full virtual machine.
  • docker to run your job inside a Docker container with a specified image
  • A job is a collection of steps.
  • The first image listed in config.yml
  • A CircleCI project shares the name of the code repository for which it automates workflows, tests, and deployment.
  • must be added with the Add Project button
  • Following a project enables a user to subscribe to email notifications for the project build status and adds the project to their CircleCI dashboard.
  • A step is a collection of executable commands
  • Users must be added to a GitHub or Bitbucket org to view or follow associated CircleCI projects.
  • Users may not view project data that is stored in environment variables.  
  • A Workflow is a set of rules for defining a collection of jobs and their run order.
  • Workflows are implemented as a directed acyclic graph (DAG) of jobs for greatest flexibility.
  • referred to as Pipelines
  • A workspace is a workflows-aware storage mechanism.
  • A workspace stores data unique to the job, which may be needed in downstream jobs.
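
A sketch of how a workspace carries data from one job to a downstream job in the same workflow; names, images, and paths are illustrative:

    version: 2.1
    jobs:
      build:
        docker:
          - image: cimg/base:stable
        steps:
          - checkout
          - run: mkdir -p workspace && echo "build output" > workspace/output.txt
          - persist_to_workspace:
              root: workspace
              paths:
                - output.txt
      deploy:
        docker:
          - image: cimg/base:stable
        steps:
          - attach_workspace:
              at: /tmp/workspace
          - run: cat /tmp/workspace/output.txt
    workflows:
      build-deploy:
        jobs:
          - build
          - deploy:
              requires:
                - build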
張 旭

What's the difference between Prometheus and Zabbix? - Stack Overflow - 0 views

  • Zabbix has a core written in C and a web UI based on PHP
  • Zabbix stores data in an RDBMS (MySQL, PostgreSQL, Oracle, SQLite) of the user's choice.
  • Prometheus uses its own database embedded into the backend process
  • Zabbix by default uses a "pull" model, where the server connects to agents on each monitored machine; agents periodically gather the info and send it to the server.
  • Prometheus also prefers a "pull" model, where the server gathers info from client machines.
  • Prometheus requires an application to be instrumented with Prometheus client library (available in different programming languages) for preparing metrics.
  • expose metrics for Prometheus (similar to "agents" for Zabbix)
  • Zabbix uses its own tcp-based communication protocol between agents and a server.
  • Prometheus uses HTTP with protocol buffers (+ text format for ease of use with curl).
  • Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be supported by modern visualization tools like Grafana.
  • Prometheus offers solution for alerting that is separated from its core into Alertmanager application.
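
To make the pull model concrete, a sketch (the instrumented application and target addresses are hypothetical): an application exposes metrics over HTTP, and the Prometheus server is configured to scrape those endpoints periodically:

    # metrics endpoints can be inspected with plain curl (text format)
    curl http://10.0.0.1:9100/metrics

    # fragment of prometheus.yml: the server pulls from the listed targets
    scrape_configs:
      - job_name: 'node'
        scrape_interval: 15s
        static_configs:
          - targets: ['10.0.0.1:9100', '10.0.0.2:9100']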
張 旭

FreeIPAv2:Dynamic updates with GSS-TSIG - FreeIPA - 0 views

  • This short tutorial will teach you how to set up your name server so that you can dynamically update the resource records with the help of FreeIPA.
  • tkey-gssapi-keytab
  • BIND version
    • 張 旭: named -v
  • add the DNS service principal and acquire the keytab
  • kinit admin
  • All machines belonging to the Kerberos realm EXAMPLE.COM are allowed to update their own A record.
  • grant EXAMPLE.COM krb5-self * A;
  • Allow Kerberos principal SERVICE/ipaserver.example.com@EXAMPLE.COM to do any updates in whole zone.
  • A machine is allowed to update its own PTR record in the reverse zone.
  • kinit admin
  • with kinit. (This step is not required if the client was enrolled by ipa-client-install script or host keytab is already in place for other reasons.)
  • the "server dns.example.com" command tells nsupdate to update the specified DNS server
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.2.1.2 Configuring an Instance for Group Repli... - 0 views

  • store replication metadata in system tables instead of files
  • collect the write set and encode it as a hash using the XXHASH64 hashing algorithm
  • not start operations automatically when the server starts
  • for incoming connections from other members in the group
  • The server listens on this port for member-to-member connections. This port must not be used for user applications at all
  • The loose- prefix used for the group_replication variables above instructs the server to continue to start if the Group Replication plugin has not been loaded at the time the server is started.
  • For example, if each server instance is on a different machine use the IP and port of the machine, such as 10.0.0.1:33061. The recommended port for group_replication_local_address is 33061
  • does not need to list all members in the group
  • The server that starts the group does not make use of this option, since it is the initial server and as such, it is in charge of bootstrapping the group
  • start the bootstrap member first, and let it create the group
  • Creating a group and joining multiple members at the same time is not supported.
  • must only be used on one server instance at any time
  • Disable this option after the first server instance comes online
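
A my.cnf sketch for one group member plus the bootstrap sequence, assuming example member addresses and a placeholder group UUID:

    [mysqld]
    server_id=1
    gtid_mode=ON
    enforce_gtid_consistency=ON
    master_info_repository=TABLE
    relay_log_info_repository=TABLE
    binlog_checksum=NONE
    log_slave_updates=ON
    log_bin=binlog
    binlog_format=ROW
    transaction_write_set_extraction=XXHASH64
    loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
    loose-group_replication_start_on_boot=off
    loose-group_replication_local_address="10.0.0.1:33061"
    loose-group_replication_group_seeds="10.0.0.1:33061,10.0.0.2:33061,10.0.0.3:33061"
    loose-group_replication_bootstrap_group=off

    -- on the first member only, to bootstrap the group:
    SET GLOBAL group_replication_bootstrap_group=ON;
    START GROUP_REPLICATION;
    SET GLOBAL group_replication_bootstrap_group=OFF;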
張 旭

Docker can now run within Docker - Docker Blog - 0 views

  • New in Docker 0.6 is the “privileged” mode for containers. It allows you to run some containers with (almost) all the capabilities of their host machine, regarding kernel features and device access.
  • Among the (many!) possibilities of the “privileged” mode, you can now run Docker within Docker itself.
  • in the new privileged mode.
  • /var/lib/docker should be a volume. This is important, because the filesystem of a container is an AUFS mountpoint, composed of multiple branches; and those branches have to be “normal” filesystems (i.e. not AUFS mountpoints).
  • /var/lib/docker, the place where Docker stores its containers, cannot be an AUFS filesystem.
  • we use them as a pass-through to the “normal” filesystem of the host machine.
  • The /var/lib/docker directory of the nested Docker will live somewhere in /var/lib/docker/volumes on the host system.
  • since the private Docker instances run in privileged mode, they can easily escalate to the host, and you probably don’t want this! If you really want to run something like this and expose it to the public, you will have to fine-tune the LXC template file, to restrict the capabilities and devices available to the Docker instances.
  • When you are inside a privileged container, you can always nest one more level
  • the LXC tools cannot start nested containers if the devices control group is not in its own hierarchy.
  • if you use AppArmor, you need a special policy to support nested containers.
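
A sketch of the idea (the image name is hypothetical): the outer container gets privileged mode, and /var/lib/docker is declared as a volume so the nested daemon does not write onto an AUFS mountpoint:

    # start the outer container in privileged mode, with /var/lib/docker as a volume
    docker run --privileged -v /var/lib/docker -t -i example/dind-image

    # inside it, the nested Docker daemon can start containers of its own
    docker run -t -i ubuntu /bin/bash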
張 旭

Installing kubeadm | Kubernetes - 0 views

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
  • The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
  • some virtual machines may have identical values.
  • Kubernetes uses these values to uniquely identify the nodes in the cluster.
  • Make sure that the br_netfilter module is loaded.
  • you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config,
  • kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.
  • one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version.
  • Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
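
A sketch of the host-preparation steps these notes describe, on a Debian/Ubuntu-like machine:

    # swap must be disabled for the kubelet to work properly
    sudo swapoff -a

    # the MAC address and product_uuid should be unique per node
    sudo cat /sys/class/dmi/id/product_uuid

    # load br_netfilter and let iptables see bridged traffic
    sudo modprobe br_netfilter
    echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
    sudo sysctl --system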
張 旭

Considerations for large clusters | Kubernetes - 0 views

  • A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
  • Kubernetes v1.23 supports clusters with up to 5000 nodes.
  • criteria:
    • No more than 110 pods per node
    • No more than 5000 nodes
    • No more than 150000 total pods
    • No more than 300000 total containers
  • In-use IP addresses
  • run one or two control plane instances per failure zone, scaling those instances vertically first and then scaling horizontally after reaching the point of falling returns to (vertical) scale.
  • Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone
  • store Event objects in a separate dedicated etcd instance.
  • start and configure additional etcd instance
  • Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact on other components.
  • Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
  • When running on large clusters, addons often consume more of some resources than their default limits.
  • Many addons scale horizontally - you add capacity by running more pods
  • The VerticalPodAutoscaler can run in recommender mode to provide suggested figures for requests and limits.
  • Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
  • VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods.
  • The cluster autoscaler integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster.
  • The addon resizer helps you in resizing the addons automatically as your cluster's scale changes.
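
A sketch of a VerticalPodAutoscaler in recommender mode for an addon; the target Deployment name is hypothetical, and the VPA components must already be installed in the cluster:

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: example-addon-vpa
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: example-addon
      updatePolicy:
        updateMode: "Off"   # recommend requests/limits without applying them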
crazylion lee

Artificial Neural Networks for Beginners » Loren on the Art of MATLAB - 0 views

  • "Deep Learning is a very hot topic these days especially in computer vision applications and you probably see it in the news and get curious. Now the question is, how do you get started with it? Today's guest blogger, Toshi Takeuchi, gives us a quick tutorial on artificial neural networks as a starting point for your study of deep learning."
crazylion lee

WildML - AI, Deep Learning, NLP - 0 views

  • "AI, DEEP LEARNING, NLP"
crazylion lee

Berkeley AI Materials - 0 views

  • "AI -- Course Materials"
張 旭

Docker Explained: Using Dockerfiles to Automate Building of Images | DigitalOcean - 0 views

  • CMD runs, upon creation of a container, an application that was already installed inside the image using RUN (e.g. RUN apt-get install …)
  • ENTRYPOINT argument sets the concrete default application that is used every time a container is created using the image.
  • ENV command is used to set the environment variables (one or more).
  • EXPOSE command is used to associate a specified port to enable networking between the running process inside the container and the outside world
  • defines the base image to use to start the build process
  • Unlike CMD, it actually is used to build the image (forming another layer on top of the previous one which is committed).
  • VOLUME command is used to enable access from your container to a directory on the host machine
  • set where the command defined with CMD is to be executed
  • To detach yourself from the container, use the escape sequence CTRL+P followed by CTRL+Q
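
A small Dockerfile sketch that exercises the instructions discussed above; the base image, package, and paths are only examples:

    # FROM defines the base image the build process starts from
    FROM ubuntu:18.04

    # RUN executes during the build and commits a new layer on top of the previous one
    RUN apt-get update && apt-get install -y nginx

    # ENV sets environment variables
    ENV APP_ENV=production

    # VOLUME enables access from the container to a directory on the host machine
    VOLUME /var/log/nginx

    # WORKDIR sets where the command defined with CMD is executed
    WORKDIR /etc/nginx

    # EXPOSE associates a port for networking with the outside world
    EXPOSE 80

    # CMD runs the already-installed application when a container is created
    CMD ["nginx", "-g", "daemon off;"]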
crazylion lee

Microsoft/CNTK: Computational Network Toolkit (CNTK) - 1 views

  • "Computational Network Toolkit (CNTK)"
張 旭

Virtual Private Cloud (VPC)  |  Virtual Private Cloud  |  Google Cloud - 0 views

  • A single Google Cloud VPC can span multiple regions without communicating across the public Internet.
  • Google Cloud VPCs let you increase the IP space of any subnets without any workload shutdown or downtime.
  • Get private access to Google services, such as storage, big data, analytics, or machine learning, without having to give your service a public IP address.
  • Enable dynamic Border Gateway Protocol (BGP) route updates between your VPC network and your non-Google network with our virtual router.
  • Configure a VPC Network to be shared across several projects in your organization.
  • Hosting globally distributed multi-tier applications, by creating a VPC with subnets.
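
Two of these capabilities as gcloud sketches; the subnet name, region, and prefix length are examples:

    # grow a subnet's IP range in place, without workload shutdown or downtime
    gcloud compute networks subnets expand-ip-range example-subnet \
        --region=us-central1 --prefix-length=20

    # give instances without public IPs private access to Google services
    gcloud compute networks subnets update example-subnet \
        --region=us-central1 --enable-private-ip-google-access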
張 旭

What is DevOps? | Atlassian - 0 views

  • DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
  • increased trust, faster software releases, ability to solve critical issues quickly, and better manage unplanned work.
  • bringing together the best of software development and IT operations.
  • DevOps is a culture, a movement, a philosophy.
  • a firm handshake between development and operations
  • DevOps isn’t magic, and transformations don’t happen overnight.
  • Infrastructure as code
  • Culture is the #1 success factor in DevOps.
  • Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
  •  'not our problem' mentality
  • DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
  • Speed is everything.
  • Lack of automated test and review cycles block the release to production and poor incident response time kills velocity and team confidence
  • Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
  • Unplanned work is a reality that every team faces–a reality that most often impacts team productivity.
  • “cross-functional collaboration.”
  • All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
  • DevOps doesn’t solve tooling problems. It solves human problems.
  • Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
  • sharing a common goal and having a plan to reach it together
  • join sprint planning sessions, daily stand-ups, and sprint demos.
  • DevOps culture across every department
  • open channels of communication, and talk regularly
  • DevOps isn’t one team’s job. It’s everyone’s job.
  • automation eliminates repetitive manual work, yields repeatable processes, and creates reliable systems.
  • Build, test, deploy, and provisioning automation
  • continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
  • automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
  • “configuration as code.”
  • when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
  • A DevOps mindset sees opportunities for continuous improvement everywhere.
  • regular retrospectives
  • A/B testing
  • failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
  • Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
  • Our engineers are responsible for QA, writing, and running their own tests to get the software out to customers.
  • How long did it take to go from development to deployment? 
  • How long does it take to recover after a system failure?
  • service level agreements (SLAs)
  • Devops isn't any single person's job. It's everyone's job.
  • DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
  • developers and operators pair with each other in each phase of the application’s lifecycle.
張 旭

Setup ProxySQL for High Availability (not a Single Point of Failure) - Percona Database... - 0 views

  • ProxySQL doesn’t natively support any high availability solution
  • most common solution is setting up ProxySQL as part of a tile architecture, where Application/ProxySQL are deployed together.
    • 張 旭: i.e. deploy ProxySQL bundled directly with the app
  • If we have 400 instances of ProxySQL, we end up keeping our databases busy just performing the checks.
  • Another possible approach is to have two layers of ProxySQL, one close to the application and another in the middle to connect to the database.
  • creates additional complexity in the management of the platform, and it adds network hops.
  • combining existing solutions and existing blocks: KeepAlived + ProxySQl + MySQL.
  • Keepalived implements a set of checkers to dynamically and adaptively maintain and manage a load-balanced server pool according to the health of its members.
  • Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions.
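
A keepalived.conf sketch of this pattern: a health checker probes the local ProxySQL (here via its admin interface on port 6032 with the default credentials, which you would change), and VRRP moves a virtual IP to a healthy node. Addresses, interface, and priority are examples:

    vrrp_script chk_proxysql {
        script "/usr/bin/mysqladmin ping -h 127.0.0.1 -P 6032 -u admin -padmin"
        interval 2
        weight -20
    }

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 101
        virtual_ipaddress {
            10.0.0.100
        }
        track_script {
            chk_proxysql
        }
    }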