
Larvata / Group items tagged interactive

crazylion lee

GNU Octave - 0 views

  •  
    "GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable."
crazylion lee

Leaflet - a JavaScript library for interactive maps - 0 views

  •  
    "an open-source JavaScript library for mobile-friendly interactive maps"
張 旭

Helm - 0 views

  • Helm is a tool for managing Kubernetes packages called charts
  • Install and uninstall charts into an existing Kubernetes cluster
  • The chart is a bundle of information necessary to create an instance of a Kubernetes application.
  • The config contains configuration information that can be merged into a packaged chart to create a releasable object.
  • A release is a running instance of a chart, combined with a specific config.
  • The Helm Client is a command-line client for end users.
  • Interacting with the Tiller server
  • The Tiller Server is an in-cluster server that interacts with the Helm client, and interfaces with the Kubernetes API server.
  • Combining a chart and configuration to build a release
  • Installing charts into Kubernetes, and then tracking the subsequent release
  • the client is responsible for managing charts, and the server is responsible for managing releases.
  • The Helm client is written in the Go programming language, and uses the gRPC protocol suite to interact with the Tiller server.
  • The Tiller server is also written in Go. It provides a gRPC server to connect with the client, and it uses the Kubernetes client library to communicate with Kubernetes.
  • The Tiller server stores information in ConfigMaps located inside of Kubernetes.
  • Configuration files are, when possible, written in YAML (see the chart sketch after this list).
  •  
    "Helm is a tool for managing Kubernetes packages called charts"
張 旭

A Guide to Testing Rails Applications - Ruby on Rails Guides - 0 views

  • Rails tests can also simulate browser requests and thus you can test your application's response without having to test it through your browser.
  • your tests will need a database to interact with as well.
  • By default, every Rails application has three environments: development, test, and production
  • models directory is meant to hold tests for your models
  • controllers directory is meant to hold tests for your controllers
  • integration directory is meant to hold tests that involve any number of controllers interacting
  • Fixtures are a way of organizing test data; they reside in the fixtures folder
  • The test_helper.rb file holds the default configuration for your tests
  • Fixtures allow you to populate your testing database with predefined data before your tests run
  • Fixtures are database independent and written in YAML.
  • one file per model.
  • Each fixture is given a name followed by an indented list of colon-separated key/value pairs (see the sketch after this list).
  • Keys which resemble YAML keywords such as 'yes' and 'no' are quoted so that the YAML Parser correctly interprets them.
  • define a reference node between two different fixtures.
  • ERB allows you to embed Ruby code within templates
  • The YAML fixture format is pre-processed with ERB when Rails loads fixtures.
  • Rails by default automatically loads all fixtures from the test/fixtures folder for your models and controllers test.
  • Fixtures are instances of Active Record.
  • access the object directly
  • test_helper.rb specifies the default configuration to run our tests. This is included with all the tests, so any methods added to this file are available to all your tests.
  • test with method names prefixed with test_.
  • An assertion is a line of code that evaluates an object (or expression) for expected results.
  • bin/rake db:test:prepare
  • Every test contains one or more assertions. Only when all the assertions are successful will the test pass.
  • rake test command
  • run a particular test method from the test case by running the test and providing the test method name.
  • The . (dot) above indicates a passing test. When a test fails you see an F; when a test throws an error you see an E in its place.
  • we first wrote a test which fails for a desired functionality, then we wrote some code which adds the functionality and finally we ensured that our test passes. This approach to software development is referred to as Test-Driven Development (TDD).
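
  The fixture format described in the highlights above (named records, colon-separated key/value pairs, quoted YAML-keyword-like values, references between fixtures, ERB pre-processing) can be sketched as a single file. The model and attribute names here (users.yml, david, organization) are hypothetical:

      # test/fixtures/users.yml : one file per model, one named fixture per record
      david:
        name: David
        admin: "yes"            # values that resemble YAML keywords are quoted
        organization: initech   # references the fixture named initech in organizations.yml
                                # (assumes a belongs_to :organization association)

      <% 1.upto(3) do |i| %>
      guest_<%= i %>:           # ERB is pre-processed when Rails loads fixtures
        name: Guest <%= i %>
      <% end %>

  Because fixtures are instances of Active Record, a test can then access the object directly, e.g. users(:david).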
crazylion lee

eMotion | Adrien M / Claire B - 0 views

  •  
    "eMotion est une application fonctionnant sur OS X destinée à écrire des interactions entre des objets et des informations. Elle se base sur des modèles physiques pour animer des situations. "
crazylion lee

Home - Automated interactive transcription tool - Trint ∙ Transforming Talk - 0 views

  •  
    "The toolkit that lets you transcribe, search, edit and share media content online."
crazylion lee

Google Web Designer - 0 views

  •  
    "Create engaging, interactive HTML5-based designs and motion graphics that can run on any device. "
crazylion lee

Bot Framework - 0 views

  •  
    " Build and connect intelligent bots to interact with your users naturally wherever they are, from text/sms to Skype, Slack, Office 365 mail and other popular services."
crazylion lee

Computer Science Field Guide - 0 views

  •  
    "An online interactive resource for high school students learning about computer science"
張 旭

Understanding Ruby Blocks, Procs and Lambdas - Robert Sosinski - 0 views

  • Ruby has four different ways of using closures
  • The code block interacts with a variable
  • collect! will use the code provided within the block on each element in the array
  • do not need to specify the name of blocks within your methods
  • use the yield keyword. Calling this keyword will execute the code within the block provided to the method
  • A block is just a Proc!
  • saving reusable code as an object itself. This reusable code is called a Proc (short for procedure)
  • The only difference between blocks and Procs is that a block is a Proc that cannot be saved, and as such, is a one time use solution
  • a bang at the end
  • That value is now available to the block and returned by the yield call
  • The block has the number available (also called n)
  • a flexible way to interact with our method
  • an ampersand argument
crazylion lee

Vega - 0 views

  •  
    "Vega is a declarative format for creating, saving, and sharing visualization designs. With Vega, visualizations are described in JSON, and generate interactive views using either HTML5 Canvas or SVG."
crazylion lee

Keepalived for Linux - 0 views

  •  
    Keepalived is routing software written in C. The main goal of the project is to provide simple and robust facilities for load balancing and high availability on Linux systems and Linux-based infrastructures. The load-balancing framework relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module, which provides Layer-4 load balancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage a load-balanced server pool according to the servers' health. High availability, on the other hand, is achieved by the VRRP protocol, a fundamental building block for router failover. In addition, Keepalived implements a set of hooks into the VRRP finite state machine, providing low-level and high-speed protocol interactions. The Keepalived frameworks can be used independently or all together to provide resilient infrastructures.
張 旭

The Rails Command Line - Ruby on Rails Guides - 0 views

  • rake --tasks
  • Think of destroy as the opposite of generate.
  • runner runs Ruby code in the context of Rails non-interactively
  • rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it (see the config/database.yml sketch after this list)
  • The console command lets you interact with your Rails application from the command line. On the underside, rails console uses IRB
  • rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
  • You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
  • rake db:version is useful when troubleshooting
  • The doc: namespace has the tools to generate documentation for your app, API documentation, guides.
  • rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
  • You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
  • rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
  • rake secret will give you a pseudo-random key to use for your session secret.
  • Custom rake tasks have a .rake extension and are placed in Rails.root/lib/tasks.
  • rails new . --git --database=postgresql
  • All commands can run with -h or --help to list more information
  • The rails server command launches a small web server named WEBrick which comes bundled with Ruby
  • rails server -e production -p 4000
  • You can run a server as a daemon by passing a -d option
  • The rails generate command uses templates to create a whole lot of things.
  • Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
  • All Rails console utilities have help text.
  • generate controller ControllerName action1 action2.
  • With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
  • A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
  • Unit tests are code that tests and makes assertions about code.
  • Unit tests are your friend.
  • rails console --sandbox
  • rails db
  • Each task has a description, and should help you find the thing you need.
  • rake tmp:clear clears all the three: cache, sessions and sockets.
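
  Several of the highlights above (the three default environments, rails dbconsole reading your database settings, and --database=postgresql) revolve around config/database.yml. A minimal sketch of that file, with hypothetical database names, might look like:

      # config/database.yml : one block per Rails environment
      development:
        adapter: postgresql
        database: myapp_development

      test:
        adapter: postgresql
        database: myapp_test

      production:
        adapter: postgresql
        database: myapp_production

  rails dbconsole reads this file to decide which database CLI (psql here) to drop you into.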
張 旭

Run Reference - Docker Documentation - 0 views

  • In detached mode (-d=true or just -d), all I/O should be done through network connections or shared volumes because the container is no longer listening to the command line where you executed docker run.
  • start the process in the container and attach the console to the process's standard input, output, and standard error. It can even pretend to be a TTY (this is what most command line executables expect) and pass along signals.
  • For interactive processes (like a shell) you will typically want a tty as well as persistent standard input (STDIN), so you'll use -i -t together in most interactive cases.
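
  As a rough analogue (an assumption for illustration, not something the run reference itself shows), the -d, -i and -t flags map onto service-level keys in a Compose file:

      # docker-compose.yml sketch: Compose services run detached by default (like -d);
      # stdin_open and tty correspond to docker run -i and -t
      services:
        shell:
          image: ubuntu
          stdin_open: true   # like -i : keep STDIN open for interactive use
          tty: true          # like -t : allocate a pseudo-TTY
          command: /bin/bash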
crazylion lee

Protractor - end to end testing for AngularJS - 0 views

  •  
    "Protractor is an end-to-end test framework for AngularJS applications. Protractor runs tests against your application running in a real browser, interacting with it as a user would."
crazylion lee

Welcome to the Mink documentation! - Mink 1.6 documentation - 0 views

  •  
    "One of the most important parts in the web is a browser. A browser is the window through which web users interact with web applications and other users. Users are always talking with web applications through browsers. "
張 旭

Secrets - Kubernetes - 0 views

  • Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
  • A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
  • Users can create secrets, and the system also creates some secrets.
  • To use a secret, a pod needs to reference the secret.
  • A secret can be used with a pod in two ways: as files in a volume mounted on one or more of its containers, or used by kubelet when pulling images for the pod.
  • --from-file
  • You can also create a Secret in a file first, in json or yaml format, and then create that object.
  • The Secret contains two maps: data and stringData.
  • The data field is used to store arbitrary data, encoded using base64.
  • Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods to use this type of secret.
  • kubectl get and kubectl describe avoid showing the contents of a secret by default.
  • stringData field is provided for convenience, and allows you to provide secret data as unencoded strings.
  • where you are deploying an application that uses a Secret to store a configuration file, and you want to populate parts of that configuration file during your deployment process.
  • a field is specified in both data and stringData, the value from stringData is used.
  • The keys of data and stringData must consist of alphanumeric characters, ‘-’, ‘_’ or ‘.’.
  • Newlines are not valid within these strings and must be omitted.
  • When using the base64 utility on Darwin/macOS, users should avoid using the -b option to split long lines.
  • create a Secret from generators and then apply it to create the object on the Apiserver.
  • The generated Secret's name has a suffix appended by hashing the contents.
  • base64 --decode
  • Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod (see the manifest sketch after this list).
  • Multiple pods can reference the same secret.
  • Each key in the secret data map becomes the filename under mountPath
  • each container needs its own volumeMounts block, but only one .spec.volumes is needed per secret
  • use .spec.volumes[].secret.items field to change target path of each key:
  • If .spec.volumes[].secret.items is used, only keys specified in items are projected. To consume all keys from the secret, all of them must be listed in the items field.
  • You can also specify the permission mode bits that files in a secret will have. If you don’t specify any, 0644 is used by default.
  • JSON spec doesn’t support octal notation, so use the value 256 for 0400 permissions.
  • Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files.
  • Mounted Secrets are updated automatically
  • Kubelet is checking whether the mounted secret is fresh on every periodic sync.
  • cache propagation delay depends on the chosen cache type
  • A container using a Secret as a subPath volume mount will not receive Secret updates.
  • Multiple pods can reference the same secret.
  • env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
  • Inside a container that consumes a secret in environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data.
  • An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the Kubelet so it can pull a private image on behalf of your Pod.
  • a secret needs to be created before any pods that depend on it.
  • Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
  • Individual secrets are limited to 1MiB in size.
  • Kubelet only supports use of secrets for Pods it gets from the API server.
  • Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional.
  • References to Secrets that do not exist will prevent the pod from starting.
  • References via secretKeyRef to keys that do not exist in a named Secret will prevent the pod from starting.
  • Once a pod is scheduled, the kubelet will try to fetch the secret value.
  • Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret.
  • volumes:
      - name: secret-volume
        secret:
          secretName: ssh-key-secret
  • Special characters such as $, *, and ! require escaping. If the password you are using has special characters, you need to escape them using the \ character.
  • You do not need to escape special characters in passwords from files
  • make that key begin with a dot
  • Dotfiles in secret volume
  • .secret-file
  • a frontend container which handles user interaction and business logic, but which cannot see the private key;
  • a signer container that can see the private key, and responds to simple signing requests from the frontend
  • When deploying applications that interact with the secrets API, access should be limited using authorization policies such as RBAC
  • watch and list requests for secrets within a namespace are extremely powerful capabilities and should be avoided
  • watch and list all secrets in a cluster should be reserved for only the most privileged, system-level components.
  • additional precautions with secret objects, such as avoiding writing them to disk where possible.
  • A secret is only sent to a node if a pod on that node requires it
  • only the secrets that a pod requests are potentially visible within its containers
  • each container in a pod has to request the secret volume in its volumeMounts for it to be visible within the container.
  • In the API server, secret data is stored in etcd.
  • limit access to etcd to admin users
  • Base64 encoding is not an encryption method and is considered the same as plain text.
  • A user who can create a pod that uses a secret can also see the value of that secret.
  • anyone with root on any node can read any secret from the apiserver, by impersonating the kubelet.
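
  Pulling the fragments above together, a minimal manifest sketch showing a Secret with both data and stringData, mounted as a volume by a pod, might look like the following. The names mysecret, secret-volume and defaultMode 256 echo the highlights above; everything else is illustrative:

      apiVersion: v1
      kind: Secret
      metadata:
        name: mysecret
      type: Opaque
      data:
        username: YWRtaW4=          # base64-encoded "admin"
      stringData:
        config.yaml: |              # unencoded; wins if the same key also appears in data
          apiUrl: https://example.com/api
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: secret-demo
      spec:
        containers:
          - name: app
            image: nginx
            volumeMounts:           # each container needs its own volumeMounts block
              - name: secret-volume
                mountPath: /etc/secrets
                readOnly: true
        volumes:                    # but only one .spec.volumes entry is needed per secret
          - name: secret-volume
            secret:
              secretName: mysecret
              defaultMode: 256      # JSON has no octal notation, so 256 means 0400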
張 旭

Running Terraform in Automation | Terraform - HashiCorp Learn - 0 views

  • In default usage, terraform init downloads and installs the plugins for any providers used in the configuration automatically, placing them in a subdirectory of the .terraform directory.
  • allows each configuration to potentially use different versions of plugins.
  • In automation environments, it can be desirable to disable this behavior and instead provide a fixed set of plugins already installed on the system where Terraform is running. This then avoids the overhead of re-downloading the plugins on each execution
  • the desire for an interactive approval step between plan and apply.
  • terraform init -input=false to initialize the working directory.
  • terraform plan -out=tfplan -input=false to create a plan and save it to the local file tfplan.
  • terraform apply -input=false tfplan to apply the plan stored in the file tfplan (see the pipeline sketch after this entry).
  • the environment variable TF_IN_AUTOMATION is set to any non-empty value, Terraform makes some minor adjustments to its output to de-emphasize specific commands to run.
  • it can be difficult or impossible to ensure that the plan and apply subcommands are run on the same machine, in the same directory, with all of the same files present.
  • to allow only one plan to be outstanding at a time.
  • forcing plans to be approved (or dismissed) in sequence
  • -auto-approve
  • The -auto-approve option tells Terraform not to require interactive approval of the plan before applying it.
  • obtain the archive created in the previous step and extract it at the same absolute path. This re-creates everything that was present after plan, avoiding strange issues where local files were created during the plan step.
  • a "build artifact"
  •  
    "In default usage, terraform init downloads and installs the plugins for any providers used in the configuration automatically, placing them in a subdirectory of the .terraform directory. "