Larvata / Group items tagged blog

張 旭

Embracing REST with mind, body and soul « Plataformatec Blog - 0 views

  • the gain with the respond_with introduction is more obvious if you compare the index, new and show actions
    • 張 旭
       
      It looks like respond_with automatically replies with a response whose format matches the request's format.
  • you can define supported formats at the class level and tell in the instance the resource to be represented by those formats.
  • when a request comes, for example with format xml, it will first search for a template at users/index.xml. If the template is not available, it tries to render the resource given (in this case, @users) by calling :to_xml on it
  • how to render our resources depending on the format AND HTTP verb
  • By default, ActionController::Responder holds all formats behavior in a method called to_format.
  • Suddenly we realized that respond_with is useful just for GET requests
  • it renders the resource based on the HTTP verb and whether it has errors or not
  • Your controller code just has to send the resource using respond_with(@resource) and respond_with will call ActionController::Responder, which will know what to do.
    • 張 旭
       
      In short, just write respond_with and don't worry about the rest; the Responder figures out the HTTP verb and handles it for you.
  • Anything that responds to :call can be a responder, so you can create your custom classes or even give procs, fibers and so on.
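A minimal Ruby sketch of the flow these highlights describe (controller and model names are illustrative, not from the post):

    class UsersController < ApplicationController
      # Supported formats declared at the class level.
      respond_to :html, :xml, :json

      def index
        @users = User.all
        # Looks for a users/index.<format> template first; if none exists,
        # falls back to rendering the resource, e.g. @users.to_xml.
        respond_with(@users)
      end

      def create
        @user = User.create(params[:user])
        # For non-GET verbs the Responder inspects the HTTP verb and
        # @user.errors: redirect on success, render the errors otherwise.
        respond_with(@user)
      end
    end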
crazylion lee

Five Open-Source Slack Alternatives » okTurtles Blog - 0 views

  •  
    " Slack is a popular team communications application for organizations that offers group chat and direct messaging for mobile, web, and desktop platforms. While Slack offers many benefits to customers, there are also downsides to using the platform, including high subscription fees and the risk of a massive leak of private data if Slack's servers are ever breached (again)."
crazylion lee

SSH: Best practices at Aris' Blog - Computers, ssh and rock'n roll - 0 views

  •  
    "SSH: Best practices"
張 旭

Why it took a long time to build that tiny link preview on Wikipedia - Wikimedia Blog - 0 views

  • taken a few years for us to get this out to everyone
張 旭

Setup ProxySQL for High Availability (not a Single Point of Failure) - Percona Database... - 0 views

  • ProxySQL doesn’t natively support any high availability solution
  • most common solution is setting up ProxySQL as part of a tile architecture, where Application/ProxySQL are deployed together.
    • 張 旭
       
      Deploy ProxySQL bundled directly with the app.
  • If we have 400 instances of ProxySQL, we end up keeping our databases busy just performing the checks.
  • Another possible approach is to have two layers of ProxySQL, one close to the application and another in the middle to connect to the database.
  • creates additional complexity in the management of the platform, and it adds network hops.
  • combining existing solutions and existing blocks: KeepAlived + ProxySQl + MySQL.
  • Keepalived implements a set of checkers to dynamically and adaptively maintain and manage load-balanced server pool according to their health.
  • Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions.
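A minimal keepalived.conf sketch of that KeepAlived + ProxySQL + MySQL layout (the interface name, priorities, and virtual IP are assumptions, not from the article):

    vrrp_script chk_proxysql {
        script "killall -0 proxysql"   # checker: is the proxysql process alive?
        interval 2
    }

    vrrp_instance VI_1 {
        state MASTER
        interface eth0                 # assumed NIC name
        virtual_router_id 51
        priority 101                   # standby nodes get a lower priority
        virtual_ipaddress {
            10.0.0.100                 # floating IP the application connects to
        }
        track_script {
            chk_proxysql               # demote this node if ProxySQL dies
        }
    }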
crazylion lee

Idle Time » Blog Archive » Colorized man pages: Understood and customized - 0 views

  •  
    "Colorized man pages: Understood and customized"
張 旭

ProxySQL Experimental Feature: Native ProxySQL Clustering - Percona Database Performanc... - 0 views

  • several ProxySQL instances to communicate with and share configuration updates with each other.
  • 4 tables where you can make changes and propagate the configuration
  • When you make a change like INSERT/DELETE/UPDATE on any of these tables, after running the command LOAD … TO RUNTIME, ProxySQL creates a new checksum of the table’s data and increments the version number in the table runtime_checksums_values
  • all nodes are monitoring and communicating with all the other ProxySQL nodes. When another node detects a change in the checksum and version (both at the same time), each node will get a copy of the table that was modified, make the same changes locally, and apply the new config to RUNTIME to refresh the new config, make it visible to the applications connected and automatically save it to DISK for persistence.
  • a “synchronous cluster” so any changes to these 4 tables on any ProxySQL server will be replicated to all other ProxySQL nodes.
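As a sketch, a change issued on any node's admin interface (the hostname and hostgroup below are made up) propagates like this:

    -- mysql_servers is one of the four replicated tables
    INSERT INTO mysql_servers (hostgroup_id, hostname, port)
    VALUES (10, 'db1.example.com', 3306);

    LOAD MYSQL SERVERS TO RUNTIME;  -- recomputes the checksum and bumps the
                                    -- version in runtime_checksums_values;
                                    -- peers detect the change and sync
    SAVE MYSQL SERVERS TO DISK;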
張 旭

Ruby and AOP: Decouple your code even more - Arkency Blog - 0 views

  • Dark Parts in our apps - persistence, networking, logging, notifications… these parts are scattered in our code
  • aspect-oriented programming!
  • components are parts we can easily encapsulate into some kind of code abstraction - methods, objects or procedures.
  • application’s logic is a great example of a component
  • Aspects cross-cut our application - when we use some kind of persistence (e.g. a database) or network communication (such as ZMQ sockets) our components need to know about it.
  • Aspect-oriented programming aims to get rid of cross-cuts by separating aspect code from component code using injections of our aspects in certain join points in our component code.
  • It’s responsible for the “pushing snippets” scenario
  • SRP-conformant object
  • the join points in Ruby
  • advice
    • 張 旭
       
      Terminology from AOP.
  • In most cases after and before advice are sufficient.
  • what does it mean to “evaluate code around” something? In our case it means: Don’t run this method. Take it and push to my advice as an argument and evaluate this advice
  • to provide a join point
  • You’ll often see empty methods in code written in the AOP paradigm
  • provide aspect code to link with our use case
  • use case is a pure domain object, without even knowing it’s connected with some kind of persistence and logging layer.
  • Aspect-oriented programming is fixing the problem with polluting pure logic objects with technical context of our applications.
  • we treat our glues as a configuration part, not the logic part of our apps.
  • Glues should not contain any logic at all
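A tiny Ruby sketch of the idea, with hypothetical names and Module#prepend standing in for whatever AOP library the article uses: the use case stays pure, the aspect wraps around it, and the glue is mere configuration.

    # Pure domain object: knows nothing about persistence or logging.
    class CreateSnippet
      def call(snippet)
        snippet.strip                # stand-in for real domain logic
      end
    end

    # Aspect: "around advice" wrapping the component's method.
    module PersistenceAspect
      def call(snippet)
        result = super               # evaluate the original method inside the advice
        puts "persisted: #{result}"  # stand-in for a real persistence layer
        result
      end
    end

    # The glue: configuration, not logic.
    CreateSnippet.prepend(PersistenceAspect)
    CreateSnippet.new.call(" my snippet ")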
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured for various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
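A minimal haproxy.cfg sketch tying these pieces together; the acl line is the one quoted above, while the server addresses are made up:

    frontend www
        bind *:80
        mode http                          # layer 7 proxying
        acl url_blog path_beg /blog        # matches requests whose path begins with /blog
        use_backend blog-backend if url_blog
        default_backend web-backend        # handles every other case

    backend web-backend
        mode http
        balance roundrobin                 # the load balancing algorithm
        server web1 10.0.0.11:80 check     # "check" enables the TCP health check
        server web2 10.0.0.12:80 check

    backend blog-backend
        mode http
        balance roundrobin
        server blog1 10.0.0.21:80 check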
張 旭

What exactly was the point of [ "x$var" = "xval" ]? - Vidar's Blog - 0 views

  • x-hack
  • test "x$arg" = "x-f"
  • the utility used a simple recursive descent parser without backtracking, which gave unary operators precedence over binary operators and ignored trailing arguments.
  • The x-hack is effective because no unary operators can start with x.
  • the x-hack could be used to work around certain bugs all the way up until 2015, seven years after StackOverflow wrote it off as an archaic relic of the past!
  • The Dash issue of [ "(" = ")" ] was originally reported in a form that affected both Bash 3.2.48 and Dash 0.5.4 in 2008. You can still see this on macOS bash today
  •  
    "x$var"
張 旭

Best practices for building Kubernetes Operators and stateful apps | Google Cloud Blog - 0 views

  • use the StatefulSet workload controller to maintain identity for each of the pods, and to use Persistent Volumes to persist data so it can survive a service restart.
  • a way to extend Kubernetes functionality with application specific logic using custom resources and custom controllers.
  • An Operator can automate various features of an application, but it should be specific to a single application
  • Kubebuilder is a comprehensive development kit for building and publishing Kubernetes APIs and Controllers using CRDs
  • Design declarative APIs for operators, not imperative APIs. This aligns well with Kubernetes APIs that are declarative in nature.
  • With declarative APIs, users only need to express their desired cluster state, while letting the operator perform all necessary steps to achieve it.
  • scaling, backup, restore, and monitoring. An operator should be made up of multiple controllers that specifically handle each of those features.
  • the operator can have a main controller to spawn and manage application instances, a backup controller to handle backup operations, and a restore controller to handle restore operations.
  • each controller should correspond to a specific CRD so that the domain of each controller's responsibility is clear.
  • If you keep a log for every container, you will likely end up with an unmanageable amount of logs.
  • integrate application-specific details into the log messages, such as adding a prefix for the application name.
  • you may have to use external logging tools such as Google Stackdriver, Elasticsearch, Fluentd, or Kibana to perform the aggregations.
  • adding labels to metrics to facilitate aggregation and analysis by monitoring systems.
  • a more viable option is for application pods to expose a metrics HTTP endpoint for monitoring tools to scrape.
  • A good way to achieve this is to use open-source application-specific exporters for exposing Prometheus-style metrics.
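A sketch of what such a declarative API might look like to a user; the group, kind and fields below are hypothetical, not from the article:

    apiVersion: myapp.example.com/v1   # hypothetical CRD group/version
    kind: MyAppCluster
    metadata:
      name: demo
    spec:
      replicas: 3              # desired state only; controllers reconcile toward it
      version: "1.2.0"
      backup:
        schedule: "0 2 * * *"  # handled by a dedicated backup controller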
張 旭

Speeding up Docker image build process of a Rails application | BigBinary Blog - 1 view

  • we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod which would prevent that pod from accepting any requests until these tasks are finished.
  • run bundle install and rake assets:precompile tasks while or before containerizing the Rails application.
  • Kubernetes pulls the image, starts a Docker container using that image inside the pod and runs puma server immediately.
  • Since source code changes often, the previously cached layer for the ADD instruction is invalidated due to the mismatching checksums.
  • The ARG instruction in the Dockerfile defines the RAILS_ENV variable, which is implicitly used as an environment variable by the rest of the instructions defined just after that ARG instruction.
  • RUN instructions are used to install gems and precompile static assets using sprockets
  • Instead, Docker automatically re-uses the previously built layer for the RUN bundle install instruction if the Gemfile.lock file remains unchanged.
  • everyday we need to build a lot of Docker images containing source code from varying Git branches as well as with varying environments.
  • it is hard for Docker to cache layers for bundle install and rake assets:precompile tasks and re-use those layers during every docker build command run with different application source code and a different environment.
  • By default, Bundler installs gems at the location which is set by Rubygems.
張 旭

Logstash Alternatives: Pros & Cons of 5 Log Shippers [2019] - Sematext - 0 views

  • In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry
  • Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
  • Logstash’s biggest con or “Achilles’ heel” has always been performance and resource consumption (the default heap size is 1GB).
  • This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones.
  • Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
  • differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources.
  • Filebeat is just a tiny binary with no dependencies.
  • For example, how aggressive it should be in searching for new files to tail and when to close file handles when a file didn’t get changes for a while.
  • For example, the apache module will point Filebeat to default access.log and error.log paths
  • Filebeat’s scope is very limited,
  • Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
  • Filebeat can parse JSON
  • you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.
  • You shouldn’t need a buffer when tailing files because, just as Logstash, Filebeat remembers where it left off
  • For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
  • The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
  • It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
  • rsyslog is the fastest shipper
  • Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
  • use it as a simple router/shipper, any decent machine will be limited by network bandwidth
  • It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
  • rsyslog requires more work to get the configuration right
  • the main difference between Logstash and rsyslog is that Logstash is easier to use while rsyslog is lighter.
  • rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
  • rsyslog also works well when you need that ultimate performance.
  • syslog-ng as an alternative to rsyslog (though historically it was actually the other way around).
  • a modular syslog daemon that can do much more than just syslog
  • Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
  • Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
  • syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
  • Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
  • Fluentd plugins are in Ruby and very easy to write.
  • while you can send structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded)
  • Fluent Bit, which is to Fluentd similar to how Filebeat is for Logstash.
  • Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
  • Splunk isn’t a log shipper, it’s a commercial logging solution
  • Graylog is another complete logging solution, an open-source alternative to Splunk.
  • everything goes through graylog-server, from authentication to queries.
  • Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.
  • it depends
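For the light-shipper case above, a minimal filebeat.yml sketch (paths and hosts are made up; the syntax matches recent Filebeat versions):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/app/*.log   # files to tail; Filebeat remembers where it left off

    output.elasticsearch:        # or output.kafka for larger deployments
      hosts: ["localhost:9200"]  # let Elasticsearch do both parsing and storing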
張 旭

Introducing CNAME Flattening: RFC-Compliant CNAMEs at a Domain's Root - 0 views

  • you can now safely use a CNAME record, as opposed to an A record that points to a fixed IP address, as your root record in CloudFlare DNS without triggering a number of edge case error conditions because you’re violating the DNS spec.
  • CNAME Flattening allowed us to use a root domain while still maintaining DNS fault-tolerance across multiple IP addresses.
  • Traditionally, the root record of a domain needed to point to an IP address (known as an A -- for "address" -- Record).
  • WordPlumblr allows its users to use custom domains that point to the WordPlumblr infrastructure
  • A CNAME is an alias. It allows one domain to point to another domain which, eventually if you follow the CNAME chain, will resolve to an A record and IP address.
  • For example, WordPlumblr might have assigned the CNAME 6equj5.wordplumblr.com for Foo.com. Foo.com and the other customers may have all initially resolved, at the end of the CNAME chain, to the same IP address.
  • you usually don't want to address memory directly but, instead, you set up a pointer to a block of memory where you're going to store something. If the operating system needs to move the memory around then it just updates the pointer to point to wherever the chunk of memory has been moved to.
  • CNAMEs work great for subdomains like www.foo.com or blog.foo.com. Unfortunately, they don't work for a naked domain like foo.com itself.
  • the DNS spec enshrined that the root record -- the naked domain without any subdomain -- could not be a CNAME.
  • Technically, the root could be a CNAME but the RFCs state that once a record has a CNAME it can't have any other entries associated with it
  • a way to support a CNAME at the root, but still follow the RFC and return an IP address for any query for the root record.
  • extended our authoritative DNS infrastructure to, in certain cases, act as a kind of DNS resolver.
  • if there's a CNAME at the root, rather than returning that record directly we recurse through the CNAME chain ourselves until we find an A Record.
  • allows the flexibility of having CNAMEs at the root without breaking the DNS specification.
  • We cache the CNAME responses -- respecting the DNS TTLs, just like a recursor should -- which means often we have the answer without having to traverse the chain.
  • CNAME flattening solved email resolution errors for us which was very key.
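In zone-file terms the flattening looks roughly like this; the CNAME target is the article's example, the answered IP is made up:

    ; What the customer configures at the root in CloudFlare DNS:
    foo.com.    300  IN  CNAME  6equj5.wordplumblr.com.

    ; What CloudFlare's authoritative DNS actually answers for the root,
    ; after recursing through the CNAME chain itself:
    foo.com.    300  IN  A      203.0.113.10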
張 旭

Docker can now run within Docker - Docker Blog - 0 views

  • Docker 0.6 is the new “privileged” mode for containers. It allows you to run some containers with (almost) all the capabilities of their host machine, regarding kernel features and device access.
  • Among the (many!) possibilities of the “privileged” mode, you can now run Docker within Docker itself.
  • in the new privileged mode.
  • that /var/lib/docker should be a volume. This is important, because the filesystem of a container is an AUFS mountpoint, composed of multiple branches; and those branches have to be “normal” filesystems (i.e. not AUFS mountpoints).
  • /var/lib/docker, the place where Docker stores its containers, cannot be an AUFS filesystem.
  • we use them as a pass-through to the “normal” filesystem of the host machine.
  • The /var/lib/docker directory of the nested Docker will live somewhere in /var/lib/docker/volumes on the host system.
  • since the private Docker instances run in privileged mode, they can easily escalate to the host, and you probably don’t want this! If you really want to run something like this and expose it to the public, you will have to fine-tune the LXC template file, to restrict the capabilities and devices available to the Docker instances.
  • When you are inside a privileged container, you can always nest one more level
  • the LXC tools cannot start nested containers if the devices control group is not in its own hierarchy.
  • if you use AppArmor, you need a special policy to support nested containers.
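A sketch of launching a nested Docker under those constraints (the image name is hypothetical; Docker 0.6 spelled the flag -privileged):

    # Privileged mode plus /var/lib/docker as a volume, so the nested
    # Docker's storage lands on a normal, non-AUFS host filesystem.
    docker run --privileged -v /var/lib/docker -t -i my-dind-image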