
Arquitectura? group: items tagged "configuration"


Pablo Lalloni

typesafehub/config - 0 views

  •  
    "Configuration library for JVM languages. Overview implemented in plain Java with no dependencies extensive test coverage supports files in three formats: Java properties, JSON, and a human-friendly JSON superset merges multiple files across all formats can load from files, URLs, or classpath good support for "nesting" (treat any subtree of the config the same as the whole config) users can override the config with Java system properties, java -Dmyapp.foo.bar=10 supports configuring an app, with its framework and libraries, all from a single file such as application.conf parses duration and size settings, "512k" or "10 seconds" converts types, so if you ask for a boolean and the value is the string "yes", or you ask for a float and the value is an int, it will figure it out. JSON superset features: comments includes substitutions ("foo" : ${bar}, "foo" : Hello ${who}) properties-like notation (a.b=c) less noisy, more lenient syntax substitute environment variables This library limits itself to config files. If you want to load config from a database or something, you would need to write some custom code. The library has nice support for merging configurations so if you build one from a custom source it's easy to merge it in."
Pablo Lalloni

Running Secured Docker Registry 2.0 - Container Solutions - 0 views

  •  
    "The new Docker Registry 2.0 was released on April 16th, 2015. It was completely rewritten in Go with added support for the new Docker Registry HTTP API V2 (thus only working with Docker 1.6+), promising to provide faster and more secure distribution of images. If you work with Docker and for some reason decided not to use the public Docker Hub, a private Docker Registry is an essential part of your architecture. But even if you don't have private images, you will likely need to use your own registry in production/testing for efficiency. The default installation, however, runs without encryption and authentication. I was wondering what's involved in securing it. There is an official tutorial on how to configure TLS on a registry server. TLS/SSL is absolutely necessary for any secure setup, but I also wanted to enable an authentication mechanism. The Configuration Reference document describes two authentication options supported by Docker Registry itself: so-called silly and token solutions. The silly one is apparently only useful for very limited development use-cases. The token solution seems to be more serious, but because of the lack of documentation (at the time of writing), I decided to find an alternative approach to secure it. In this article I'm going to show you how to set up the Docker Registry 2.0 with username/password authentication and SSL using the official Docker Registry image and a custom configured nginx as a proxy server."
Pablo Lalloni

Introduction - Terraform - 2 views

  •  
    "Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied. The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc. The key features of Terraform are: Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used. Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure. Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure. Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors."
munyeco

Why Vagrant? - Vagrant Documentation - 3 views

  •  
    Meanwhile, on one side of the divide: Why Vagrant? Vagrant provides easy-to-configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team. To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet can be used to automatically install and configure software on the machine.
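A minimal Vagrantfile sketch showing the pieces the quote mentions (a provider plus a provisioner); the box name and inline script are illustrative only:

```ruby
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"          # illustrative base box

  config.vm.provider "virtualbox" do |vb|    # or vmware, aws, ...
    vb.memory = 1024
  end

  # Industry-standard provisioning: a shell script here; Chef or Puppet work the same way
  config.vm.provision "shell", inline: "apt-get update && apt-get install -y nginx"
end
```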
Pablo Lalloni

michaelsauter/crane - 0 views

  •  
    "Crane is a tool to orchestrate Docker containers. It works by reading in some configuration (JSON or YAML) which describes how to obtain images and how to run containers. This simplifies setting up a development environment a lot as you don't have to bring up every container manually, remembering all the arguments you need to pass. By storing the configuration next to the data and the app(s) in a repository, you can easily share the whole environment."
Pablo Lalloni

New Relic, Docker Showcase the Coming Devops Disruption | Trinity Ventures - 0 views

  •  
    "In 2010 we led the seed round for Docker (formerly known as dotCloud) for one simple reason: devops means that the way applications are packaged, deployed, and run is fundamentally changing (though Docker's business model has evolved since its early days as a PaaS vendor, the fundamental premise is the same).  Rather than requiring custom configurations and painstaking management, Docker "containerizes" applications components such that every container is lightweight and behaves consistently.  Applications and their underlying components can be programmatically deployed, managed and moved on ever-changing cloud infrastructure without a hint of operating system or hardware configuration.  In a pre-Docker world, companies with tremendous and evolving application demands looked to virtualization as a way of abstracting their infrastructure, but paid a tax in dollars and performance for doing so. In the future we think of Docker will take the mantle as the VMware of the devops world, with containers as the ultimate devops platform."
Pablo Lalloni

The BIRD Internet Routing Daemon Project - 1 views

  •  
    "Internet Routing: It's a program (well, a daemon, as you are going to discover in a moment) which works as a dynamic router in an Internet type network (that is, in a network running either the IPv4 or the IPv6 protocol). Routers are devices which forward packets between interconnected networks in order to allow hosts not connected directly to the same local area network to communicate with each other. They also communicate with the other routers in the Internet to discover the topology of the network which allows them to find optimal (in terms of some metric) rules for forwarding of packets (which are called routing tables) and to adapt themselves to the changing conditions such as outages of network links, building of new connections and so on. Most of these routers are costly dedicated devices running obscure firmware which is hard to configure and not open to any changes (on the other hand, their special hardware design allows them to keep up with lots of high-speed network interfaces, better than general-purpose computer does). Fortunately, most operating systems of the UNIX family allow an ordinary computer to act as a router and forward packets belonging to the other hosts, but only according to a statically configured table."
Pablo Lalloni

Consul Introduction - 1 views

  •  
    "Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure. It provides several key features: Service Discovery: Clients of Consul can provide a service, such as api or mysql, and other clients can use Consul to discover providers of a given service. Using either DNS or HTTP, applications can easily find the services they depend upon. Health Checking: Consul clients can provide any number of health checks, either associated with a given service ("is the webserver returning 200 OK"), or with the local node ("is memory utilization below 90%"). This information can be used by an operator to monitor cluster health, and it is used by the service discovery components to route traffic away from unhealthy hosts. Key/Value Store: Applications can make use of Consul's hierarchical key/value store for any number of purposes including: dynamic configuration, feature flagging, coordination, leader election, etc. The simple HTTP API makes it easy to use. Multi Datacenter: Consul supports multiple datacenters out of the box. This means users of Consul do not have to worry about building additional layers of abstraction to grow to multiple regions. Consul is designed to be friendly to both the DevOps community and application developers, making it perfect for modern, elastic infrastructures."
Pablo Lalloni

Portable Cloud Programming with Go Cloud - The Go Blog - 0 views

  •  
    "We have identified common services used by cloud applications and have created generic APIs to work across cloud providers. Today, Go Cloud is launching with blob storage, MySQL database access, runtime configuration, and an HTTP server configured with request logging, tracing, and health checking. Go Cloud offers support for Google Cloud Platform (GCP) and Amazon Web Services (AWS). We plan to work with cloud industry partners and the Go community to add support for additional cloud providers very soon. "
Pablo Lalloni

FreeIPA - 0 views

  •  
    "FreeIPA is an integrated security information management solution combining Linux (Fedora), 389 Directory Server, MIT Kerberos, NTP, DNS, Dogtag (Certificate System). It consists of a web interface and command-line administration tools. FreeIPA is an integrated Identity and Authentication solution for Linux/UNIX networked environments. A FreeIPA server provides centralized authentication, authorization and account information by storing data about user, groups, hosts and other objects necessary to manage the security aspects of a network of computers. FreeIPA is built on top of well known Open Source components and standard protocols with a very strong focus on ease of management and automation of installation and configuration tasks. Multiple FreeIPA servers can easily be configured in a FreeIPA Domain in order to provide redundancy and scalability. The 389 Directory Server is the main data store and provides a full multi-master LDAPv3 directory infrastructure. Single-Sign-on authentication is provided via the MIT Kerberos KDC. Authentication capabilities are augmented by an integrated Certificate Authority based on the Dogtag project. Optionally Domain Names can be managed using the integrated ISC Bind server. Security aspects related to access control, delegation of administration tasks and other network administration tasks can be fully centralized and managed via the Web UI or the ipa Command Line tool."
Pablo Lalloni

kubernetes-incubator/external-dns: Configure external DNS servers (AWS Route53, Google ... - 0 views

  •  
    "Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services"
Pablo Lalloni

impetus-opensource/Kundera - 0 views

  •  
    "The idea behind Kundera is to make working with NoSQL Databases drop-dead simple and fun. Kundera is being developed with following objectives: To make working with NoSQL as simple as working with SQL To serve as JPA Compliant mapping solution for NoSQL Datastores. To help developers, forget the complexity of NoSQL stores and focus on Domain Model. To make switching across data-stores as easy as changing a configuration. "
Pablo Lalloni

robbyrussell/oh-my-zsh - 0 views

  •  
    A community-driven framework for managing your zsh configuration. Includes 40+ optional plugins (rails, git, OSX, hub, capistrano, brew, ant, macports, etc.), over 80 terminal themes to spice up your morning, and an auto-update tool that makes it easy to keep up with the latest updates from the community.
  •  
    Excellent prompt and auto-completion configurations for working with git and git-flow!
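The configuration itself lives in ~/.zshrc; a minimal sketch along the lines of the note above (the theme and plugin choices are just examples):

```zsh
# ~/.zshrc (oh-my-zsh is assumed to be installed in $HOME/.oh-my-zsh)
export ZSH="$HOME/.oh-my-zsh"

ZSH_THEME="robbyrussell"        # prompt theme
plugins=(git git-flow brew)     # completions and aliases for git / git-flow

source "$ZSH/oh-my-zsh.sh"
```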
Pablo Lalloni

InfoQ: Grails Best Practices - 0 views

  • Prefer dynamic scaffolding to static scaffolding until the former no longer satisfies your requirements. For example, if only the “save” action needs to be modified, you can override just that “save” action and generate the rest of the scaffolded code dynamically at runtime.
  • To install any plugin in your application, it's better to declare it in BuildConfig.groovy rather than using the install-plugin command. Read this thread for a detailed explanation.
  • Always ensure that you include an externalized config file (even if it's an empty file), so that any configuration that needs to be overridden on production can be done without even generating a new war file (see the sketch below).
  • Keep personal settings (such as a local database username or password) in a <Local>Config.groovy file and add it to the version control ignore list, so that each team member can override configuration as per their specific needs.
  • In Grails 2.0, "grails.hibernate.cache.queries = true" by default, which caches queries automatically without the need to add cache: true. Set it to false, and cache only when it genuinely helps performance.
  •  
    This article is a basic list of best practices that our Grails projects follow, gathered from mailing lists, Stack Overflow, blogs, podcasts and internal discussions at IntelliGrape.
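A hedged sketch of the externalized-config idiom referenced in the list above, following the common grails.config.locations pattern (the file names are illustrative):

```groovy
// grails-app/conf/Config.groovy
grails.config.locations = [
    "classpath:${appName}-config.groovy",
    "file:${userHome}/.grails/${appName}-config.groovy"
]
```

Any setting defined in those external files overrides the value packaged in the war, which is what makes production overrides possible without a rebuild.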
Pablo Lalloni

Home | Dropwizard - 1 views

  •  
    Dropwizard is a Java framework for developing ops-friendly, high-performance, RESTful web services. Dropwizard has out-of-the-box support for sophisticated configuration, application metrics, logging, operational tools, and much more, allowing you and your team to ship a production-quality HTTP+JSON web service in the shortest time possible.
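A taste of that out-of-the-box configuration support, as a hedged sketch of a Dropwizard YAML config file (the ports, logger name, and levels are arbitrary):

```yaml
# config.yml
server:
  applicationConnectors:
    - type: http
      port: 8080
  adminConnectors:
    - type: http
      port: 8081

logging:
  level: INFO
  loggers:
    com.example.app: DEBUG
```

A service is then typically started with java -jar your-app.jar server config.yml.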
Pablo Lalloni

CopyFS - 1 views

  •  
    CopyFS aims to solve a common problem: given a directory, especially one full of configuration files or other modifiable files that can affect the functioning of a system or of programs, and that may be important to other users (or to the user himself), how can you be sure that a person modifying the files will back up the working version first? This filesystem solves the problem by making the whole process transparent, automatically keeping versioned copies of all the changes made to files under its control. It also allows a user to select an old version of the files, for example to repair a mistake, and to continue editing from that point.
Pablo Lalloni

Hadoop Operations - 3 views

  •  
    If you've been tasked with maintaining large and complex Hadoop clusters, or are about to be, this book is a must. You'll learn the particulars of Hadoop operations, from planning, installing, and configuring the system to providing ongoing maintenance.
Pablo Lalloni

Rationale - Datomic - 0 views

  •  
    "Datomic is a distributed database designed to enable scalable, flexible and intelligent applications, running on next-generation cloud architectures. It does this by: Bringing declarative data manipulation into the application, and the data with it Getting time, process and perception right Process (writes) require coordination Perception (reads) require none The past doesn't change Leveraging immutability, and a sound model of state Datomic has: ACID Transactions Joins A sound data model A logical query language - Datalog Thus, Datomic avoids the compromises and losses of many NoSQL solutions. In addition, it offers flexibility and power over the traditional model in supporting: Hierarchy Multi-valued attributes Minimal schema Reliable operation on unreliable, ephemeral cloud instances Time Datomic avoids manual caching and replication, complex configuration, sharding (automatic or manual), logging, locking, latching and disk management of traditional servers."
Pablo Lalloni

NixOS Linux - 0 views

  •  
    "NixOS is a Linux distribution with a unique approach to package and configuration management. Built on top of the Nix package manager, it is completely declarative, makes upgrading systems reliable, and has many other advantages."