
Setting Up a Server Linux article

  • Installing and Configuring Apache
  • apt-get install apache2
  • /etc/init.d/apache2 start|stop|restart|reload|force-reload or apachectl start|stop|restart|...
  • Apache2's configuration files, by default, are in /etc/apache2/.
  • apache2.conf is where the main configuration is
  • it used to be httpd.conf, so don't be fooled.
  • Installing and Configuring MySQL
  • apt-get install mysql-server-5.0
  • /etc/init.d/mysql start|stop|restart|reload|force-reload|status
  • To start working with this database, the root password must first be set. The word root here refers to the database administrator, not the system's root user, although they can be the same person. So let's set it and log in:
    mysqladmin -u root password 'thepassword'
    mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 10 to server version: 5.0.20a-Debian_1-log
    Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
  • /etc/mysql/my.cnf
  • Installing and Configuring PHP
  • apt-get install php5
  • Just like for Apache and MySQL, extra packages will have to be installed as well: apache2-mpm-prefork, libapache2-mod-php5 and php5-common.
  • Now, add support for MySQL: apt-get install php5-mysql
  • The configuration file for PHP is located in /etc/php5/apache2/php.ini; every time you modify it, Apache must be restarted.
  • Create a test page at /var/www/information.php to confirm PHP is working (see the sketch after this list).
  • I also like to add some more packages for PHP, such as CLI, Pear, LDAP, IMAP, GD, mhash, ODBC and PostScript: apt-get install php5-cli php-pear php5-ldap php5-imap php5-gd php5-mhash php5-odbc php5-ps
  • Installing and Configuring Postfix
  • The default configuration files are in /etc/postfix; we will only use main.cf.
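A minimal sketch of the test page mentioned above (the filename /var/www/information.php comes from the annotations; using phpinfo() as its contents is an assumption):

    # create a simple page that dumps the PHP configuration
    echo '<?php phpinfo(); ?>' > /var/www/information.php
    # Apache must be restarted after changes to php.ini
    /etc/init.d/apache2 restart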

Static IP Address Assignment

  • Static IP Address Assignment: to configure your system to use a static IP address assignment, add the static method to the inet address family statement for the appropriate interface in the file /etc/network/interfaces.
  • The example below assumes you are configuring your first Ethernet interface, identified as eth0. Change the address, netmask, and gateway values to meet the requirements of your network:
    auto eth0
    iface eth0 inet static
        address 10.0.0.100
        netmask 255.255.255.0
        gateway 10.0.0.1
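A minimal sketch for applying the change (the interface name eth0 is the one from the example above; on newer releases you may need to restart networking via its service instead, which is an assumption here):

    # bring the interface down and back up so the static settings take effect
    sudo ifdown eth0 && sudo ifup eth0
    # confirm the address that was assigned
    ip addr show eth0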

How Do I Enable Remote Access To MySQL Database Server? - MySQL - ServerGrove Support

  • Step 3: Edit the my.cnf file
  • Search for the following line: [mysqld]. Make sure the line skip-networking is commented out (or remove the line) and add the following line: bind-address=YOUR-SERVER-IP. So if your IP is 69.195.199.51, the entire block should look like this:
    [mysqld]
    port = 3306
    socket = /var/lib/mysql/mysql.sock
    skip-locking
    key_buffer_size = 16K
    max_allowed_packet = 1M
    table_open_cache = 4
    sort_buffer_size = 64K
    read_buffer_size = 256K
    read_rnd_buffer_size = 256K
    net_buffer_length = 2K
    thread_stack = 128K
    bind-address = 69.195.199.51
    # skip-networking
  • Step 4: Save & Restart. Save your edits by clicking on the Save button and restart MySQL by clicking Restart.
  • Step 5: Grant access to the remote IP address. Go to the terminal in the control panel and log in (or connect via SSH) and connect to your MySQL database:
    $ mysql -u root -p mysql
    Grant access to a new database: if you want to add a new database called foo for user bar and remote IP 69.195.199.100, then you need to type the following commands at the mysql> prompt:
    mysql> CREATE DATABASE foo;
    mysql> GRANT ALL ON foo.* TO bar@'69.195.199.100' IDENTIFIED BY 'PASSWORD';
    How do I grant access to an existing database? To grant access to an existing database called foo for user bar and remote IP 69.195.199.100, enter the following command at the mysql> prompt:
    mysql> GRANT ALL ON foo.* TO bar@'69.195.199.100' IDENTIFIED BY 'PASSWORD';
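A minimal sketch for checking the result (the user, database and IP addresses are the example values from the steps above):

    # reload the privilege tables if the grant does not seem to take effect
    mysql -u root -p -e "FLUSH PRIVILEGES;"
    # from the remote machine (69.195.199.100), verify that the server accepts the connection
    mysql -u bar -p -h 69.195.199.51 foo -e "SELECT 1;"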

Install Oracle Java 7 in Ubuntu 12.10/12.04/11.10/Any Ubuntu or Linux Mint Version ~ No...

  • For 64-bit users:
    wget -O jdk-64bit.tar.gz http://goo.gl/MSzBj
    wget -O jre-64bit.tar.gz http://goo.gl/yZgjI
    sudo -s cp -r jre-64bit.tar.gz /usr/local/java
    sudo -s cp -r jdk-64bit.tar.gz /usr/local/java
    cd /usr/local/java
    sudo -s chmod a+x jre-64bit.tar.gz
    sudo -s chmod a+x jdk-64bit.tar.gz
    sudo -s tar xvzf jre-64bit.tar.gz
    sudo -s tar xvzf jdk-64bit.tar.gz
  • sudo nano /etc/profile
    Add the following lines at the end of the file:
    JAVA_HOME=/usr/local/java/jdk*
    PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
    JRE_HOME=/usr/local/java/jre*
    PATH=$PATH:$HOME/bin:$JRE_HOME/bin
    export JAVA_HOME
    export JRE_HOME
    export PATH
  • Now enter the following commands one by one in a terminal:
    sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jre1.7.0_12/bin/java" 1
    sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.7.0_12/bin/javac" 1
    sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jre1.7.0_12/bin/javaws" 1
    sudo update-alternatives --set java /usr/local/java/jre1.7.0_12/bin/java
    sudo update-alternatives --set javac /usr/local/java/jdk1.7.0_12/bin/javac
    sudo update-alternatives --set javaws /usr/local/java/jre1.7.0_12/bin/javaws
    . /etc/profile
  • Check the installed Java version: java -version
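A minimal sketch for confirming that the alternatives now point where you expect (the jre1.7.0_12/jdk1.7.0_12 paths follow the example above):

    update-alternatives --display java
    update-alternatives --display javac
    java -version
    echo $JAVA_HOME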

How to migrate Liferay portal from one windows machine to other? - Stack Overflow

  • "These are the steps that I followed and was able to migrate Liferay successfully:
    1. Take a backup of the Liferay files and database from the first Windows machine.
    2. Install the same version of Liferay (say Liferay 5.2.3) on the second Windows machine.
    3. Shut down Liferay.
    4. Import the database on the new system.
    5. Add portal-ext.properties with the relevant entries (e.g. database name, user name, password etc.).
    6. Add the \liferay-portal-5.2.3\data\document_library files from the old machine.
    7. Start Tomcat. It will automatically do the rest.
    NOTE: In the above method I have not deployed themes and custom plugins etc.; you also have to deploy the themes and custom plugins that were used on the old system."

mysql - Best of MyISAM and InnoDB - Database Administrators

  • Some people can make the table's row format FIXED using ALTER TABLE mydb.mytb ROW_FORMAT=Fixed; and can get a 20% increase in read performance without any other changes. This works and works effectively FOR MyISAM. This will not produce faster results for InnoDB because ... that's right ... you must consult the gen_clust_index each time.
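A minimal sketch for checking the effect of that change, run through the mysql client (the mydb.mytb names come from the quote above):

    # inspect the engine and current row format
    mysql -u root -p -e "SELECT TABLE_NAME, ENGINE, ROW_FORMAT FROM information_schema.TABLES WHERE TABLE_SCHEMA='mydb' AND TABLE_NAME='mytb';"
    # switch a MyISAM table to the fixed row format
    mysql -u root -p -e "ALTER TABLE mydb.mytb ROW_FORMAT=FIXED;"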

Moving Sugar to Another Server - SugarCRM Support Site

  • japtone (forum reply): "If you're using Linux, try to have the same version of PHP, Apache, and DB (MySQL for instance) in order to avoid compatibility issues. On your production server, tar up the sugarcrm root directory, transfer it to the new server and untar it wherever your new root directory will be. Next take a DB dump of your database, transfer it to the new server and do a restore. Make sure Apache is configured on the new server to point to the root of sugarcrm and start it up. Make sure to modify config.php to account for any change in paths and hostname. That's what I've found to be the easiest way to 'clone' Sugar."
  • Extract the Database:
    mysqldump -h localhost -u [MySQL user, e.g. root] -p[database password] -c --add-drop-table --add-locks --all --quick --lock-tables [name of the database] > sqldump.sql
  • Copy the Filesystem: copy all your files to the new server. This can be done simply by locating the root directory of your old instance and copying it to the new server location.
  • Import the Database: import the MySQL database into the new server. Here is how you would restore your custback.sql file to the Customers database:
    mysql -u sadmin -ppass21 Customers < custback.sql
    Here is the general format you would follow:
    mysql -u [username] -p[password] [database_to_restore] < [backupfile]
    (Note that there is no space between -p and the password; with a space, the next word would be read as the database name.)
  • Check Files and Permissions. Check config.php: open <sugarroot>/config.php and make sure that all settings still apply to the new server, such as:
    array (
      'db_host_name' => 'localhost',
      'db_user_name' => 'root',
      'db_password' => 'PASSWORD',
      'db_name' => 'DATABASE_NAME',
      'db_type' => 'mysql',
    ),
    'site_url' => ..., etc.
  • Check .htaccess: open <sugarroot>/.htaccess and ensure that the new server URLs are used correctly.
  • Check Permissions: check that the permissions are correct on the new server, that is, that the entire custom and cache directories (and all their subdirectories), in addition to the config.php file, are owned and writable by the user that runs the application on the server (a sketch follows this list).
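A minimal sketch of the permission fixes, assuming Apache runs as www-data and Sugar lives in /var/www/sugarcrm (both are assumptions; adjust to your layout):

    chown -R www-data:www-data /var/www/sugarcrm/custom /var/www/sugarcrm/cache /var/www/sugarcrm/config.php
    chmod -R u+rwX /var/www/sugarcrm/custom /var/www/sugarcrm/cache
    chmod u+rw /var/www/sugarcrm/config.php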


16.4.2. Replication Compatibility Between MySQL Versions

  • MySQL supports replication from one major version to the next higher major version. For example, you can replicate from a master running MySQL 4.1 to a slave running MySQL 5.0, from a master running MySQL 5.0 to a slave running MySQL 5.1, and so on.
  • However, one may encounter difficulties when replicating from an older master to a newer slave if the master uses statements or relies on behavior no longer supported in the version of MySQL used on the slave. For example, in MySQL 5.5, CREATE TABLE ... SELECT statements are permitted to change tables other than the one being created, but are no longer allowed to do so in MySQL 5.6 (see Section 16.4.1.4, “Replication of CREATE TABLE ... SELECT Statements”).
  • Important: It is strongly recommended to use the most recent release available within a given MySQL major version because replication (and other) capabilities are continually being improved. It is also recommended to upgrade masters and slaves that use early releases of a major version of MySQL to GA (production) releases when the latter become available for that major version.
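A minimal sketch for confirming which versions are involved before setting up replication (the host names are placeholders):

    mysql -h master.example.com -u root -p -e "SELECT VERSION();"
    mysql -h slave.example.com -u root -p -e "SELECT VERSION();"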

How To Install MySQL 5.6 On Ubuntu 12.10 (Including memcached Plugin) | HowtoForge - Li...

  • sudo su
  • /etc/init.d/apparmor stop
    update-rc.d -f apparmor remove
    apt-get remove apparmor apparmor-utils
  • groupadd mysql
    useradd -r -g mysql mysql
  • apt-get install libaio1
  • tar xvfz mysql-5.6.8-rc-linux2.6-x86_64.tar.gz
  • mv mysql-5.6.8-rc-linux2.6-x86_64 mysql
    cd mysql
    chown -R mysql .
    chgrp -R mysql .
  • We will install MySQL in the /usr/local/mysql directory (with /usr/local/mysql/data being the data directory, i.e., the directory which will contain the databases).
  • The my.cnf file lives inside /usr/local/mysql.
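A minimal sketch of the next steps for a 5.6 tarball install, continuing from the commands above (the script and file names are those shipped in the MySQL 5.6 tarball; the paths follow the /usr/local/mysql layout described above):

    cd /usr/local/mysql
    # initialise the data directory as the mysql user
    scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data
    # install the init script and start the server
    cp support-files/mysql.server /etc/init.d/mysql.server
    /etc/init.d/mysql.server start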

EnablingUseOfApacheHtaccessFiles - Community Ubuntu Documentation

  • Example: here is an example of how to prevent users from accessing the directory, password-protect a specific file, and allow users to view another specific file:
    AuthUserFile /your/path/.htpasswd
    AuthName "Authorization Required"
    AuthType Basic
    Order Allow,Deny
    <Files myfile1.html>
      Order Allow,Deny
      require valid-user
    </Files>
    <Files myfile2.html>
      Order Deny,Allow
    </Files>
  • "Password-Protect a Directory With .htaccess"

FREE PDF Printer

  • Support for Windows Terminal Server

curl and libcurl


You Probably Need Parallel Except When You Don't

  • If you are running a large Oracle data warehouse, you should be using parallel.
  • Like all tools, you have to use parallel correctly; no more would we think of using a wrench to hammer a nail than should you think parallel is the answer to all performance problems. Sometimes parallel will make things worse; sometimes parallel will make performance less predictable.
  • Parallel introduces additional work to a query. Simplistically, we need to: split the query into multiple parallel processes, execute them, wait for the processes to complete, and finally coordinate the results. This all takes time to do. Our time saving comes from being able to process multiple smaller chunks of data simultaneously. If executing a step in parallel is not significantly faster than doing it without parallel, then the additional overhead may make parallel processing the slower option; this is typically the case with small tables, where a full tablescan or an indexed access is fast. Use too few parallel processes and we will not gain much in performance; too many and we risk starving the database of resources for other work, or even slowing our own process as it waits for resources. If you have implemented some form of CPU resource management on your system, you may find that you experience delays as your parallel slaves 'wait their turn'.
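A minimal sketch of requesting parallelism explicitly with a hint (the connection details and table are placeholders, and the degree of 8 is only illustrative):

    sqlplus user/password@mydb <<'EOF'
    -- ask the optimizer for eight parallel slaves on the scan of this (hypothetical) table
    SELECT /*+ PARALLEL(s, 8) */ COUNT(*) FROM sales s;
    EOF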

Google Reader (250)

  • What this means in practice is that when the BI Server component starts up, it creates and reserves a number of threads in advance, determined by a number of parameters including SERVER_THREAD_RANGE.
  • You can see these threads running and ready to perform tasks for the BI Server component by using a tool such as Process Explorer for Windows
  • Thinking it through a bit, any given single query is, to a certain extent, only really going to use a small part of the total amount of CPUs available on a server, because it’s not the BI Server that runs queries in parallel, it’s the underlying database. For example, a single analysis against a single Oracle Database datasource would only really need a single BI Server thread to handle the query request, but when the underlying database receives the query, it might use a large number of its CPUs to process the query, returning results back to the BI Server to then pass back to the Presentation Server for display to the user.
  • The BI Server wouldn’t have any use for any more query threads, as it can’t really do anything with them – the exception to this being queries that generate multiple physical SQLs, for example to join data from multiple sources together and return a single set of data to the user, for which the BI Server could benefit from a higher CPU count if each of these queries in turn led to lots of threads being used – but two queries, in themselves, don’t neccessarily require two CPUs, because of course the BI Server, and the underlying CPUs, are themselves multi-threaded.
  • To conclude then – all things begin equal, the BI Server should make use of all of the CPUs that the underlying operating system presents to it, with the OS itself deciding what threads are scheduled against which CPUs. In-theory, all CPUs on the server are available to each BI Server component, but each OS is different and it might be worth experimenting if you’re sure that certain CPUs aren’t being used – but this is most probably unlikely and the main reason you’d really consider vertical scale-out of BI Server components is for fault-tolerance, or if you’re using a 32-bit OS and each process can only see a subset of the total overall memory. And, bear in mind that however many CPUs the BI Server has available to it, for queries that send just a single SQL statement down to the underlying database server, adding more CPUs or faster CPUs isn’t going to help as only a single (or so) thread will be needed to send the query from the BI Server to the database, and it’s the database that’s doing all of the work – all that this would help with is compilation and post-aggregation work, and enabling the server to handle a higher number of concurrent users. Invest in a better underlying database instead, sort out your data model, and make sure your data source back-end is as optimised as possible.
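A minimal sketch for locating the thread-pool setting discussed above (the NQSConfig.INI path is an assumption based on a standard OBIEE 11g layout; adjust it to your installation):

    grep -n "SERVER_THREAD_RANGE" $ORACLE_INSTANCE/config/OracleBIServerComponent/coreapplication_obis1/NQSConfig.INI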