The models directory is meant to hold tests for your models.
The controllers directory is meant to hold tests for your controllers.
The integration directory is meant to hold tests that involve any number of controllers interacting.
Fixtures are a way of organizing test data; they reside in the fixtures folder
The test_helper.rb file holds the default configuration for your tests
Fixtures allow you to populate your testing database with predefined data before your tests run
Fixtures are database-independent and written in YAML, with one file per model.
Each fixture is given a name followed by an indented list of colon-separated key/value pairs.
Keys which resemble YAML keywords such as 'yes' and 'no' are quoted so that the YAML parser interprets them correctly.
You can also define a reference from one fixture to another by naming it, which is how associations get wired up (see the articles.yml sketch below).
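A minimal sketch of what this looks like, assuming a hypothetical User model with name, admin and subscribed columns:

    # test/fixtures/users.yml
    david:
      name: David
      admin: true

    steve:
      name: Steve
      admin: false
      subscribed: "no"   # quoted so YAML reads a string, not the boolean false

    # test/fixtures/articles.yml -- one fixture referencing another by name
    first_article:
      title: Welcome
      user: david        # wires up the belongs_to :user association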
ERB allows you to embed Ruby code within templates
The YAML fixture format is pre-processed with ERB when Rails loads fixtures.
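For instance, you could generate a batch of fixtures with embedded Ruby (a sketch; the model and field names are made up):

    # test/fixtures/users.yml
    <% 1000.times do |n| %>
    user_<%= n %>:
      name: user<%= n %>
    <% end %>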
Rails by default automatically loads all fixtures from the test/fixtures folder for your model and controller tests.
Fixtures are instances of Active Record, so you can access the underlying object and call its methods directly, as in the sketch below.
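A sketch of that direct access, assuming the users.yml fixture above:

    # inside any test: the accessor is named after the fixture file
    david = users(:david)      # a real User Active Record object
    david.name                 # => "David"
    users(:david, :steve)      # passing several names returns an array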
test_helper.rb specifies the default configuration to run our tests. This is included with all the tests, so any methods added to this file are available to all your tests.
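A Rails 4-era test_helper.rb looks roughly like this; the assert_recent helper is an illustrative addition, not something Rails generates:

    ENV["RAILS_ENV"] ||= "test"
    require File.expand_path("../../config/environment", __FILE__)
    require "rails/test_help"

    class ActiveSupport::TestCase
      fixtures :all   # load all fixtures in test/fixtures before each test

      # any method defined here is available in every test
      def assert_recent(time)
        assert time > 1.hour.ago, "expected #{time} to be recent"
      end
    end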
Tests are methods whose names are prefixed with test_.
An assertion is a line of code that evaluates an object (or expression) for expected results.
Run bin/rake db:test:prepare to load the current schema into the test database before running tests.
Every test contains one or more assertions. Only when all the assertions are successful will the test pass.
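Putting the pieces together, a sketch of a model test (it assumes an Article model that validates the presence of title):

    # test/models/article_test.rb
    require 'test_helper'

    class ArticleTest < ActiveSupport::TestCase
      # a plain test_-prefixed method
      def test_the_truth
        assert true
      end

      # the same mechanism via the test macro Rails provides
      test "should not save article without title" do
        article = Article.new
        assert_not article.save, "Saved the article without a title"
      end
    end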
The rake test command runs your entire test suite.
You can run a particular test method from a test case by providing the test file and the test method name, as shown below.
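With the Rails 4-era runner that looks like:

    bin/rake test test/models/article_test.rb                 # one file
    bin/rake test test/models/article_test.rb test_the_truth  # one method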
A . (dot) in the test output indicates a passing test. When a test fails you see an F; when a test throws an error you see an E in its place.
we first wrote a test which fails for a desired functionality, then we wrote some code which adds the functionality and finally we ensured that our test passes. This approach to software development is referred to as Test-Driven Development (TDD).
"This article is about lexmachine, a library I wrote to help you write great lexers in Go. If you are looking to write a golang lexer or a lexer in golang this article is for you."
the [ (test) utility used a simple recursive descent parser without backtracking, which gave unary operators precedence over binary operators and ignored trailing arguments.
The x-hack is effective because no unary operators can start with x.
the x-hack could be used to work around certain bugs all the way up until 2015, seven years after StackOverflow wrote it off as an archaic relic of the past!
The Dash issue of [ "(" = ")" ] was originally reported in a form that affected both Bash 3.2.48 and Dash 0.5.4 in 2008. You can still see this on macOS bash today.
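A sketch of both the failure and the workaround:

    [ "(" = ")" ]      # misparsed on affected shells: "(" looks like a group
    [ "x(" = "x)" ]    # fine: both operands start with x, so they can only be
                       # plain strings, leaving = as the binary operator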
The destination here is Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry.
Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
Logstash’s biggest con, or “Achilles’ heel”, has always been performance and resource consumption (the default heap size is 1GB).
This can be a problem for high-traffic deployments, where the Logstash boxes would need to be comparable in size to the Elasticsearch ones.
Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
The main differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources.
Filebeat is just a tiny binary with no dependencies.
For example, how aggressively it should search for new files to tail, and when to close file handles for files that haven’t changed in a while.
For example, the apache module will point Filebeat to the default access.log and error.log paths, as sketched below.
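Roughly like this, assuming Filebeat 5.3+ (where modules landed; the Apache module was called apache2 back then, and the path override is illustrative):

    filebeat.modules:
      - module: apache2
        access:
          var.paths: ["/var/log/apache2/access.log*"]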
Filebeat’s scope is very limited: initially it could only send logs to Logstash and Elasticsearch, but now it can also send to Kafka and Redis, and in 5.x it gains filtering capabilities as well.
Filebeat can parse JSON, so if your applications log in JSON you can push directly from Filebeat to Elasticsearch and have Elasticsearch do both parsing and storing.
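A filebeat.yml sketch of that setup, using 5.x-style syntax (paths and hosts are illustrative):

    filebeat.prospectors:
      - input_type: log
        paths: ["/var/log/app/*.json"]
        json.keys_under_root: true   # decode each line as JSON, lift fields up
        json.add_error_key: true     # flag lines that fail to decode

    output.elasticsearch:
      hosts: ["localhost:9200"]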
You shouldn’t need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off.
For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
The default syslog daemon on most Linux distros, rsyslog can do much more than just pick logs from the syslog socket and write them to /var/log/messages.
It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
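A minimal sketch of that pipeline in rsyslog’s config format (the file path and server are illustrative, and a real config would tune the queue):

    module(load="imfile")            # tail files
    module(load="omelasticsearch")   # ship to Elasticsearch

    input(type="imfile" File="/var/log/app.log" Tag="app:")

    action(type="omelasticsearch"
           server="localhost"
           queue.type="linkedList"        # buffer in memory...
           queue.size="10000"             # ...up to 10k messages
           action.resumeRetryCount="-1")  # retry forever if ES is down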
rsyslog is the fastest shipper we tested.
Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth before rsyslog is.
It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
rsyslog requires more work to get the configuration right
the main difference between Logstash and rsyslog is that Logstash is easier to use, while rsyslog is lighter.
rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
rsyslog also works well when you need that ultimate performance.
Some use syslog-ng as an alternative to rsyslog (though historically it was actually the other way around: rsyslog appeared as an alternative to syslog-ng).
Like rsyslog, it’s a modular syslog daemon that can do much more than just syslog.
Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
syslog-ng has an easier, more polished feel than rsyslog, but it likely doesn’t match rsyslog’s ultimate performance.
Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
Fluentd plugins are in Ruby and very easy to write.
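For a feel of how easy: a sketch of a filter plugin against the v0.14+ plugin API (the plugin name and the field it adds are made up):

    require 'socket'
    require 'fluent/plugin/filter'

    module Fluent::Plugin
      class AddHostnameFilter < Filter
        # usable in fluentd config as: <filter **> @type add_hostname </filter>
        Fluent::Plugin.register_filter('add_hostname', self)

        # called for each record; return the (possibly modified) record
        def filter(tag, time, record)
          record['hostname'] ||= Socket.gethostname
          record
        end
      end
    end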
While you can easily funnel structured data through Fluentd, it’s not made to have the flexibility of the other shippers on this list (Filebeat excluded).
There’s also Fluent Bit, which is to Fluentd what Filebeat is to Logstash.
Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
Splunk isn’t a log shipper; it’s a commercial logging solution.
Graylog is another complete logging solution, an open-source alternative to Splunk.
Everything goes through graylog-server, from authentication to queries.
Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.