Larvata / Group items tagged: transfer

crazylion lee

transfer.sh - Easy and fast file sharing from the command-line. - 1 views

  •  
    "Easy file sharing from the command line"
crazylion lee

Instant.io - Streaming file transfer over WebTorrent - 0 views

shared by crazylion lee on 27 Sep 16
  •  
    "Streaming file transfer over WebTorrent (torrents on the web)"
crazylion lee

Nmap: the Network Mapper - Free Security Scanner - 1 views

shared by crazylion lee on 22 Nov 15
  •  
    "Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping)."
張 旭

Docker for AWS persistent data volumes | Docker Documentation - 0 views

  • Cloudstor is a modern volume plugin built by Docker
  • Docker swarm mode tasks and regular Docker containers can use a volume created with Cloudstor to mount a persistent data volume.
  • Global shared Cloudstor volumes mounted by all tasks in a swarm service.
  • ...14 more annotations...
  • Workloads running in a Docker service that require access to low latency/high IOPs persistent storage, such as a database engine, can use a relocatable Cloudstor volume backed by EBS.
  • Each relocatable Cloudstor volume is backed by a single EBS volume.
  • If a swarm task using a relocatable Cloudstor volume gets rescheduled to another node within the same availability zone as the original node where the task was running, Cloudstor detaches the backing EBS volume from the original node and attaches it to the new target node automatically.
  • If the task gets rescheduled to a node in a different availability zone, Cloudstor transfers the contents of the backing EBS volume to the destination availability zone using a snapshot, and cleans up the EBS volume in the original availability zone.
  • Typically the snapshot-based transfer process across availability zones takes between 2 and 5 minutes unless the work load is write-heavy.
  • A swarm task is not started until the volume it mounts becomes available
  • Sharing/mounting the same Cloudstor volume backed by EBS among multiple tasks is not a supported scenario and leads to data loss.
  • To use a Cloudstor volume to share data between tasks, choose the appropriate EFS-backed shared volume option.
  • When multiple swarm service tasks need to share data in a persistent storage volume, you can use a shared Cloudstor volume backed by EFS.
  • a volume and its contents can be mounted by multiple swarm service tasks without the risk of data loss
  • over NFS
  • the persistent data backed by EFS volumes is always available.
  • shared Cloudstor volumes only work in those AWS regions where EFS is supported.
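
A sketch of the two volume flavours described above, assuming the plugin is installed as cloudstor:aws on a Docker-for-AWS swarm; the --opt names (backing, size, ebstype), volume names and service details follow the linked page and should be checked against your plugin version:

# relocatable volume backed by a single EBS volume (one task at a time)
docker volume create -d "cloudstor:aws" \
  --opt backing=relocatable --opt size=25 --opt ebstype=gp2 db-data

# shared volume backed by EFS, mountable by many tasks over NFS
docker volume create -d "cloudstor:aws" --opt backing=shared web-assets

# mount the shared volume into every task of a swarm service
docker service create --name web --replicas 3 \
  --mount type=volume,volume-driver="cloudstor:aws",source=web-assets,target=/var/www \
  nginx:alpine
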
crazylion lee

Laptop USB Console Adapter - CV211, ATEN KVM Cables - 0 views

  •  
    " 'Turn your laptop into a mobile console in a few seconds.' The CV211 Laptop USB Console Adapter provides a direct Laptop-to-Computer connection for fast and easy remote desktop access with no software to install. High efficiency in a compact design, the CV211 offers bi-directional file transfers, hotkey macros, video recording and screenshots through a USB 2.0 and VGA composite cable."
張 旭

Specification - Swagger - 0 views

shared by 張 旭 on 29 Jul 16
  • A list of parameters that are applicable for all the operations described under this path.
  • MUST NOT include duplicated parameters
  • this field SHOULD be less than 120 characters.
  • ...33 more annotations...
  • Unique string used to identify the operation.
  • The id MUST be unique among all operations described in the API.
  • A list of MIME types the operation can consume.
  • A list of MIME types the operation can produce
  • A unique parameter is defined by a combination of a name and location.
  • There can be one "body" parameter at most.
  • Required. The list of possible responses as they are returned from executing this operation.
  • The transfer protocol for the operation. Values MUST be from the list: "http", "https", "ws", "wss".
  • Declares this operation to be deprecated. Usage of the declared operation should be refrained from. Default value is false.
  • A declaration of which security schemes are applied for this operation.
  • A unique parameter is defined by a combination of a name and location.
  • Path
  • Query
  • Header
  • Body
  • Form
  • Required. The location of the parameter. Possible values are "query", "header", "path", "formData" or "body".
  • the parameter value is actually part of the operation's URL
  • Parameters that are appended to the URL
  • The payload that's appended to the HTTP request.
  • Since there can only be one payload, there can only be one body parameter.
  • The name of the body parameter has no effect on the parameter itself and is used for documentation purposes only
  • body and form parameters cannot exist together for the same operation
  • This is the only parameter type that can be used to send files, thus supporting the file type.
  • If the parameter is in "path", this property is required and its value MUST be true.
  • default value is false.
  • The schema defining the type used for the body parameter.
  • The value MUST be one of "string", "number", "integer", "boolean", "array" or "file"
  • Default value is false
  • Required if type is "array". Describes the type of items in the array.
  • Determines the format of the array if type array is used
  • enum
  • pattern
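
A minimal Swagger 2.0 fragment, written here as a shell heredoc, that exercises the points above; the path, operationId and parameter names are invented for illustration:

cat > swagger-fragment.yaml <<'EOF'
paths:
  /pets/{petId}:
    put:
      operationId: updatePet            # unique among all operations
      consumes: [application/json]      # MIME types the operation can consume
      produces: [application/json]      # MIME types the operation can produce
      schemes: [https]                  # transfer protocol for the operation
      deprecated: false
      parameters:
        - name: petId
          in: path                      # part of the operation's URL
          required: true                # MUST be true for path parameters
          type: string
        - name: body                    # name of a body parameter is documentation only
          in: body                      # at most one body parameter per operation
          required: true
          schema:
            type: object
      responses:
        "200":
          description: pet updated
EOF
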
crazylion lee

Freeze - the ultimate Amazon Glacier file transfer client for Mac - 0 views

  •  
    "Easy access to your archives from a convenient app"
張 旭

Using Workflows to Schedule Jobs - CircleCI - 1 views

  • A workflow is a set of rules for defining a collection of jobs and their run order.
  • Schedule workflows for jobs that should only run periodically.
  • run multiple jobs in parallel
  • ...37 more annotations...
  • rerun just the failed job
  • Builds without workflows require a build job.
  • Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
  • workflow orchestration with two parallel jobs
  • jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
  • requires: key
  • fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
  • Holding a Workflow for a Manual Approval
  • Workflows can be configured to wait for manual approval of a job before continuing to the next job
  • add a job to the jobs list with the key type: approval
  • approval is a special job type that is only available to jobs under the workflow key
  • The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
  • schedule a workflow to run at a certain time for specific branches.
  • The triggers key is only added under your workflows key
  • using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
  • By default, a workflow is triggered on every git push
  • the commit workflow has no triggers key and will run on every git push
  • The nightly workflow has a triggers key and will run on the specified schedule
  • Cron step syntax (for example, */1, */20) is not supported.
  • use a context to share environment variables
  • use the same shared environment variables when initiated by a user who is part of the organization.
  • CircleCI does not run workflows for tags unless you explicitly specify tag filters.
  • CircleCI branch and tag filters support the Java variant of regex pattern matching.
  • Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
  • The workspace is an additive-only store of data.
  • Jobs can persist data to the workspace
  • Downstream jobs can attach the workspace to their container filesystem.
  • Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
  • Workflows that include jobs running on multiple branches may require data to be shared using workspaces
  • To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
  • Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
  • Configure a job to get saved data by configuring the attach_workspace key.
  • persist_to_workspace
  • attach_workspace
  • To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
  • if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
  • check your Workflows page of the CircleCI app (not the Job page)
張 旭

cryptography - What's the difference between SSL, TLS, and HTTPS? - Information Securit... - 0 views

  • TLS is the new name for SSL
  • HTTPS is HTTP-within-SSL/TLS
  • SSL (TLS) establishes a secured, bidirectional tunnel for arbitrary binary data between two hosts
  • ...10 more annotations...
  • HTTP is meant to run over a bidirectional tunnel for arbitrary binary data; when that tunnel is an SSL/TLS connection, then the whole is called "HTTPS".
  • "SSL" means "Secure Sockets Layer".
  • "TLS" means "Transport Layer Security".
  • The name was changed to avoid any legal issues with Netscape so that the protocol could be "open and free" (and published as a RFC).
    • 張 旭
       
      It looks like they refer to the same thing; saying "TLS" simply avoids the name "SSL", which is tied up in proprietary-rights disputes.
  • not just Internet-based sockets
  • "HTTPS" is supposed to mean "HyperText Transfer Protocol Secure",
  • Other protocol acronyms have been built the same way, e.g. SMTPS, IMAPS, FTPS... all of them being a bare protocol that "got secured" by running it within some SSL/TLS.
  • To make the confusion complete: SSL (Secure Sockets Layer) often refers to the old protocol variant which starts with the handshake right away and therefore requires a dedicated port for the encrypted protocol, such as 443 instead of 80.
  • TLS (Transport Layer Security) often refers to the newer variant which allows starting with the unencrypted traditional protocol and then issuing a command (usually STARTTLS) to initiate the handshake.
  • Whether you use SSL or TLS for this depends on the configuration of your browser and of the server (there usually is an option to allow SSLv2, SSLv3 or TLS 1.x).
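
Both handshake styles described in the last two highlights can be observed with openssl s_client; the host names are placeholders:

# implicit TLS: the handshake starts immediately on a dedicated port (e.g. 443)
openssl s_client -connect www.example.com:443

# explicit TLS: start with the plain protocol, then upgrade via STARTTLS
openssl s_client -connect mail.example.com:587 -starttls smtp
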
張 旭

Getting Started with MariaDB Galera Cluster - MariaDB Knowledge Base - 0 views

  • most users are not going to bootstrap a server by executing "mysqld --wsrep-new-cluster" manually.
  • galera_new_cluster
  • Prerequisites
  • ...7 more annotations...
  • Once you have a cluster running and you want to add/reconnect another node to it, you must supply an address of one of the cluster members in the cluster address URL
  • The new node only needs to connect to one of the existing members
  • It will automatically retrieve the cluster map and reconnect to the rest of the nodes
  • it's better to list all nodes of the cluster so that any node can join a cluster connecting to any other node, even if one or more are down
  • The wsrep_cluster_address parameter should be added in my.cnf of each node, listing all the nodes of the cluster,
  • the minimum recommended number of nodes in a cluster is 3
  • While two of the members will be engaged in state transfer, the remaining member(s) will be able to keep on serving client requests.
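
A minimal sketch of the bootstrap flow above for a three-node cluster; the node names and the config path are assumptions, and wsrep_cluster_address lists every member as recommended:

# on every node: list all cluster members in the server config
# (path is distribution-specific; /etc/my.cnf.d/ is common on RPM-based systems)
cat >> /etc/my.cnf.d/galera.cnf <<'EOF'
[galera]
wsrep_on=ON
wsrep_cluster_address="gcomm://node1,node2,node3"
EOF

# bootstrap the very first node only
galera_new_cluster

# on the remaining nodes a normal start is enough; each new node connects to
# one listed member, retrieves the cluster map and joins via state transfer
systemctl start mariadb
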
張 旭

BIND9 named.conf Zone Transfer and Update statements - 0 views

  • update-policy only applies to, and may only appear in, zone clauses. This statement defines the rules by which DDNS updates may be carried out. It may only be used with a key (TSIG or SIG(0)) which is used to cryptographically sign each update request. It is mutually exclusive with allow-update in any single zone clause. The statement may take the keyword local or an update-policy-rule structure. The keyword local is designed to simplify configuration of secure updates using a TSIG key and limits the update source only to localhost (loopback address, 127.0.0.1 or ::1), thus both nsupdate (or any other application using DDNS) and the name server being updated must reside on the same host.
張 旭

Howto/DNS updates and zone transfers with TSIG - FreeIPA - 0 views

  • dnssec-keygen -a HMAC-SHA512 -b 512 -n HOST keyname
  • vim /etc/named.conf
  • keyvalue
  • ...2 more annotations...
  • ipa dnszone-mod example.com. --update-policy="grant keyname name example.com A;"
    • 張 旭
       
      Run kinit admin first.
  • ipa dnszone-mod example.com. --dynamic-update=1
    • 張 旭
       
      ipa dnszone-show --all example.com.
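
Once the grant is in place, a signed update and a TSIG-authenticated zone transfer can be tested like this; the server name, record data and the keyname:keyvalue pair are placeholders matching the steps above:

# send a dynamic update signed with the TSIG key generated above
nsupdate -y "hmac-sha512:keyname:keyvalue" <<'EOF'
server ipa.example.com
update add example.com. 300 A 192.0.2.10
send
EOF

# request a zone transfer authenticated with the same key
dig @ipa.example.com example.com. AXFR -y "hmac-sha512:keyname:keyvalue"
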
張 旭

DNS - FreeIPA - 0 views

  • FreeIPA DNS integration allows administrators to manage and serve DNS records in a domain using the same CLI or Web UI as when managing identities and policies.
  • Single-master DNS is error prone, especially for inexperienced admins.
  • a decent Kerberos experience.
  • ...14 more annotations...
  • The goal is NOT to provide a general-purpose DNS server.
  • DNS component in FreeIPA is optional and user may choose to manage all DNS records manually in other third party DNS server.
  • Clients can be configured to automatically run DNS updates (nsupdate) when their IP address changes, thus keeping their DNS records up to date. DNS zones can be configured to synchronize a client's reverse (PTR) record along with the forward (A, AAAA) DNS record.
  • It is extremely hard to change DNS domain in existing installations so it is better to think ahead.
  • You should only use names which are delegated to you by the parent domain.
  • Not respecting this rule will cause problems sooner or later!
  • DNSSEC validation.
  • For internal names you can use arbitrary sub-domain in a DNS sub-tree you own, e.g. int.example.com.. Always respect rules from the previous section.
  • General advice about DNS views is do not use them because views make DNS deployment harder to maintain and security benefits are questionable (when compared with ACL).
  • The DNS integration is based on the bind-dyndb-ldap project, which enhances the BIND name server to be able to use the FreeIPA server's LDAP instance as a data backend (data are stored in the cn=dns entry, using the schema defined by bind-dyndb-ldap).
  • FreeIPA LDAP directory information tree is by default accessible to any user in the network
  • As DNS data are often considered sensitive, and having access to the cn=dns tree would be essentially equivalent to being able to run a zone transfer against all FreeIPA-managed DNS zones, the contents of this tree in LDAP are hidden by default.
  • standard system log (/var/log/messages or system journal)
  • BIND configuration (/etc/named.conf) can be updated to produce a more detailed log.
張 旭

Open source load testing tool review 2020 - 0 views

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • ...16 more annotations...
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout.
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 => 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation - protocol overhead will make the actual number a lot lower so in the case above I would start to get worried bandwidth was an issue if I saw I could push through max 30,000 RPS, or something like that.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
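
Representative invocations of three of the tools compared above, pointed at a local server; the URL, rates and durations are arbitrary:

# Hey: 50 concurrent workers for 30 seconds, rate-limited per worker
hey -z 30s -c 50 -q 20 http://localhost:8080/

# Vegeta: targets on stdin, binary results on stdout, piped into a report
echo "GET http://localhost:8080/" | vegeta attack -rate=500 -duration=30s | vegeta report

# Wrk: raw throughput measurement with 4 threads and 100 connections
wrk -t4 -c100 -d30s http://localhost:8080/

# watch the target's CPU while a test runs to spot a CPU-bound server
top -p "$(pgrep -d, nginx)"
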
張 旭

Language Server Protocol - Wikipedia - 0 views

  • Modern IDEs provide developers with sophisticated features like code completion, refactoring, navigating to a symbol's definition, syntax highlighting, and error and warning markers.
  • an IDE needs a sophisticated understanding of the programming language that the program's source is written in.
  • Conventional compilers or interpreters for a specific programming language are typically unable to provide these language services, because they are written with the goal of either transforming the source code into object code or immediately executing the code.
  • ...5 more annotations...
  • Prior to the design and implementation of the Language Server Protocol for the development of Visual Studio Code, most language services were generally tied to a given IDE or other editor.
  • The Language Server Protocol allows for decoupling language services from the editor so that the services may be contained within a general purpose language server.
  • LSP is not restricted to programming languages. It can be used for any kind of text-based language, like specifications[7] or domain-specific languages (DSL).
  • When a user edits one or more source code files using a language server protocol-enabled tool, the tool acts as a client that consumes the language services provided by a language server.
  • The protocol does not make any provisions about how requests, responses and notifications are transferred between client and server.
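
Concretely, most implementations exchange JSON-RPC 2.0 messages framed with a Content-Length header over whatever channel the two sides agree on (stdio, pipes, sockets); a sketch of one request on the wire, with a made-up document URI and position:

# body of a textDocument/definition request (URI and position are placeholders)
body='{"jsonrpc":"2.0","id":1,"method":"textDocument/definition","params":{"textDocument":{"uri":"file:///tmp/main.go"},"position":{"line":10,"character":4}}}'

# what the client writes to the chosen transport: header, blank line, JSON body
printf 'Content-Length: %d\r\n\r\n%s' "${#body}" "$body"
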