Libranet

Author: Benjamin D. Thomas

This top-notch distribution features a rich desktop
environment complete with popular applications. Designed for experienced and new users alike, Libranet delivers the promise of a GNU/Linux desktop
today.
This distribution will bring the best of GNU/Linux to your computer. Order or DOWNLOAD it now!

The Desktop

The focus of this distribution is ‘Linux on
the Desktop’. This excellent distribution, based on Debian, gives users a fully configured desktop complete with the best and most commonly used
applications and window managers. Linux by Libranet is the most comprehensive desktop available.

User Friendly

Libranet is user-friendly for new users and a quick track to an accomplished desktop for experienced users. The simple install,
together with automatic configuration and selection of software packages, makes the system easy to get up and running.

The Latest and Greatest

The best and latest packages are included, such as the IceWM window manager, the latest stable kernel, and the KDE, GNOME, and Enlightenment desktop environments.

  • Kernel 2.4.2
  • XFree86 4.0.1
  • KDE 2.1
  • ReiserFS
  • Libranet Adminmenu

Linux is Fun

We think that using a computer should be a fun-filled and rewarding experience. We are thrilled with our desktop and
its growing popularity, and we invite you to join us.

Please visit us at www.libranet.com

Get yours on CD today!

DOWNLOAD it now!

Exploiting Amazon Web Services via PHP and SQLite

Author: Michael Stahnke

A few weeks ago a friend asked me how my book, Pro OpenSSH, was selling on Amazon.com. I was tracking the sales by going to Amazon.com and viewing the book page to examine the sales rank. The only historical data displayed was today’s Sales Rank and yesterday’s Sales Rank, which isn’t all that helpful. I decided to use PHP, SQLite, and the Amazon Web Services API to gather more useful data.

I thought it would be fun to track the sales rank over a period of time, then display a graph of the sales rank over time on a Web page.

You can gather data from Amazon in a number of ways. wget and grep could probably get the job done, but that approach is neither elegant nor encouraged by Amazon. The best way to get information is to use Amazon’s application programming interface (API).

Amazon’s Web Services (AWS) API offers a way to connect to the Amazon data warehouse and retrieve data about an Amazon item. To use the AWS API, you need to register with Amazon at the Amazon Web Services page. After registering, and accepting an end user license agreement (EULA), you will be given two keys: one for general access and requests, and one for verification and signing of requests. The general access key allows you to connect to the Amazon Web Services databases. The APIs are well-documented on the Amazon Web Services site.

I used the Amazon E-Commerce Service for my project to track sales rank on book titles over time. This service provides the ability to query an item via its Amazon Standard Identification Number (ASIN), International Standard Book Number (ISBN), author, artist, product name, publisher, or title, and retrieve virtually all information shown on the Amazon Web page about that item.

I started with an extremely simple PHP5 script that created the URL string you need to use with the Amazon Web service using Representational State Transfer (REST).

The PHP script is designed to run from the command line and simply prints the request URL. While you’re debugging the script, you can copy the URL string and paste it into a Web browser to verify that the Web services interaction is working appropriately. The following script shows the basic setup of the PHP script to query AWS.

<?php
  # Build URL that will query AWS
  $ACCESS_KEY = 'Access_Key';
  $asin = '1590594762';
  $url  = 'http://webservices.amazon.com/onca/xml?Service=AWSECommerceService';
  $url .= "&AWSAccessKeyId=$ACCESS_KEY";
  $url .= "&Operation=ItemLookup&IdType=ASIN&ItemId=$asin";
  $url .= '&ResponseGroup=Medium,OfferFull';
  print "<br />" . $url . "<br />";
?>

The output from this script is a URL you can enter in your browser. The browser will return some text formatted via XML. The XML schema for this text isn’t too complicated, and if you wanted to use an XSLT stylesheet, you could format the XML into HTML and have your presentation layer completed. However, my goal was not just to get information stored in Amazon’s database, but to store it myself so I can track the data over time.
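As an illustration of that XSLT route, here is a minimal sketch (not part of the original scripts): it feeds the AWS response through a hypothetical stylesheet named aws.xsl using PHP's DOM and XSLTProcessor classes, and it assumes the xsl extension is enabled.

<?php
  # Minimal XSLT sketch: turn the AWS XML response into HTML.
  # Assumes the xsl extension and a hypothetical stylesheet file, aws.xsl.
  $ACCESS_KEY = 'Access_Key';
  $asin = '1590594762';
  $url  = 'http://webservices.amazon.com/onca/xml?Service=AWSECommerceService';
  $url .= "&AWSAccessKeyId=$ACCESS_KEY&Operation=ItemLookup&IdType=ASIN&ItemId=$asin";
  $url .= '&ResponseGroup=Medium,OfferFull';

  $xmldoc = new DOMDocument();
  $xmldoc->loadXML(file_get_contents($url));   # AWS response

  $xsldoc = new DOMDocument();
  $xsldoc->load('aws.xsl');                    # hypothetical stylesheet

  $proc = new XSLTProcessor();
  $proc->importStylesheet($xsldoc);
  echo $proc->transformToXML($xmldoc);         # HTML for the presentation layer
?>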

For this I needed a data container. I had a few options for a data container in which to store statistics from the Web service queries. A relational database made the most sense, and PHP supports several. SQLite, introduced in PHP5, seemed like a nice choice, because SQLite is simple to administer and use.

Before you begin using SQLite, take a look at your PHP information and ensure that SQLite is supported by your configuration. If it is not, you can either compile the support into PHP or download an applicable package to add support for the database. Alternatively, you could use MySQL, PostgreSQL, Oracle, or another database.
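A quick way to perform that check from code is a throwaway script like the sketch below; it simply asks PHP whether the sqlite extension (the SQLite 2 support this article relies on) is loaded. phpinfo() would show the same information.

<?php
  # Check that the SQLite extension used in this article is available.
  if (extension_loaded('sqlite')) {
      echo "SQLite support is available.\n";
  } else {
      echo "SQLite support is missing; rebuild PHP or install the SQLite package.\n";
  }
?>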

Database setup

The database schema for this Web application involves two tables: one to track the unique Amazon Item Numbers (ASIN) and the initial date they were added into the tracking system, and the other to hold the ASIN, Sales Rank from Amazon, and datestamp for when the Sales Rank was updated. The small size of the database is a design feature.

The display page that shows the graphs, sales rank, and pricing information from Amazon will be updated upon display. That means we can pull the data, such as cover images, list price, description, title, and everything else, dynamically. Amazon stores that information, so we don’t need to. Additionally, if the data changes, such as when the price changes during a sale, the display page will have the updated information.

This is the basic schema for my SQLite database:

create table aws (
asin varchar(30),
sales_rank bigint,
active_date date);

create table item (
asin varchar(30) primary key);

Next, I added the ASIN into the item table manually via SQL. Obviously, you could write a PHP page to administer this portion of database interaction as well. After the initial script is modified to parse the XML and store the data into the database, you could set up the PHP script to run as a cron job. The script to fill the database will query the database to see what Amazon item numbers the script should be gathering statistics for. This allows for tracking of multiple items without any code changes, and thus does not lock the script into any hard-coded ASIN, as I used in the initial URL-building PHP code.
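For example, seeding the item table with the ASIN used earlier takes a single statement at the sqlite prompt:

insert into item (asin) values ('1590594762');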

#!/usr/bin/php -q
<?php

  $DB="/var/www/db/aws.db";
  # Ensure database exists
  if (! file_exists($DB))
  {
    echo "The database file $DB not found.n";
    exit(05);
  }
  # Ensure database file is writable.
  elseif(is_writable($DB) != 1)
  {
    echo "Processing cannot continue, the database $DB cannot be written to.n";
    exit(06);
  }
  # Assumes basic schema setup for $DB
  /* To get an appropriate database set up, simply run the following two lines
     of SQL inside an sqlite prompt:
       create table item ( asin varchar(30) primary key);
       create table aws ( asin varchar(30), sales_rank bigint, active_date date);
     To have an initial setup, an ASIN must be entered into the 'item' table.
     In this case I have chosen the ISBN of my book, Pro OpenSSH.
     Don't forget that rowid is kept internally in SQLite.
  */
  # Amazon Web Services access Key
  /* Get Amazon Web Services access Key (free) from http://aws.amazon.com
     The following key is not a working key, but used as an example.
  */
  $ACCESS_KEY='Access_Key';
  # Connect to database
  $dblink =  sqlite_open($DB) or die ("Couldn't connect to $DB");
  # Query database to find which ASINs to search on
  $sql = "SELECT asin FROM item ORDER BY asin";

 $resource_set = sqlite_query($dblink, $sql);
  $dt = date('Y-n-d H:i');
  while ($row = sqlite_fetch_array($resource_set, SQLITE_ASSOC))
  {
    # Value for ASIN
    $asin=$row['asin'];
    # Build URL to query based on ASIN and ACCESS_KEY
    $url='http://webservices.amazon.com/onca/xml?Service=AWSECommerceService';
    $url.="&AWSAccessKeyId=$ACCESS_KEY";
    $url.="&Operation=ItemLookup&IdType=ASIN&ItemId=$asin";
    $url.='&ResponseGroup=Medium,OfferFull';
    # Place the results into an XML string
    $xml= file_get_contents($url);
    # Use Simple XML to put results into Simple XML object (requires PHP5)
    $simple_xml=simplexml_load_string($xml);
    # Retrieve Sales Rank
    $sales_rank=$simple_xml->Items->Item->SalesRank;
    # Place Sales rank in Database
    # Build SQL statement to insert values into database
    $sql2 = "INSERT INTO aws (sales_rank,active_date,asin) VALUES ('$sales_rank', '$dt', '$asin')";
    # Insert the row and check the result
    $insert_results = sqlite_query($dblink, $sql2);
    if ($insert_results)
    {
       echo "Database $DB updated.\n";
    }
    else
    {
       echo "Database $DB update failed.\n";
       exit(07);
    }
  }
?>

After retrieving results from AWS, the script inserts the sales rank parameter along with a date and which ASIN the information correlates to into the aws table. This table will provide the data points for displaying graphs and other presentation material about an Amazon item.

The script parses the XML returned by a URL like the one built in the first PHP listing. The XML is loaded into a string using PHP’s file_get_contents function, and from there into a SimpleXML object, a tree structure whose properties mirror the XML tags and let you reference any value contained inside them. To see the whole structure, you can use PHP’s var_dump or print_r functions.
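While developing, a couple of throwaway lines dropped into the script right after the simplexml_load_string() call make the available fields easy to discover:

    # Debugging aid: dump the whole item branch to see every available field
    print_r($simple_xml->Items->Item);
    # ...or pull out a single value
    echo $simple_xml->Items->Item->SalesRank . "\n";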

After finding the pertinent information to store — Sales Rank in this case — we use an insert statement to create a record inside the local database. If we get an error in almost any stage of execution, we exit and return a non-zero error code.

The final step is presentation. As stated earlier, using XSLT to parse the XML is certainly an option, but for this exercise I will just use native PHP functionality in conjunction with SimpleXML.

I wanted to graph the sales rank over time to show the status of my book sales. To do this, I used the Image::Graph PHP Extension and Application Repository (PEAR) module.

To install Image::Graph, follow normal PEAR installation procedures. The installation was fairly easy on Fedora and Ubuntu Linux systems. The next script is the display.php page, which accesses the database and displays the sales rank in graph form. The system could be modified easily to track price or albums from your favorite artist, or other items.

<?php
  #$Id$
  include 'Image/Graph.php';
  $DB="/var/www/db/aws.db";
  # Ensure database exists
  if (! file_exists($DB))
  {
    echo "The database file $DB not found.n";
    exit(05);
  }
  # Ensure database file is writable.
  elseif(is_writable($DB) != 1)
  {
    echo "Processing cannot continue, the database $DB cannot be written to.n";
    exit(06);
  }
  $ACCESS_KEY='Access_Key';
  # Connect to database
  $dblink =  sqlite_open($DB) or die ("Couldn't connect to $DB");
  # Query database to find which ASINs to search on
  $sql = "SELECT asin FROM item ORDER BY asin";
  $resource_set = sqlite_query($dblink, $sql);
  print "<table>n";
  while ($row = sqlite_fetch_array($resource_set, SQLITE_ASSOC))
  {
    # Value for ASIN
    $asin=$row['asin'];
    # Build URL to query based on ASIN and ACCESS_KEY
    $url='http://webservices.amazon.com/onca/xml?Service=AWSECommerceService';
    $url.="&AWSAccessKeyId=$ACCESS_KEY";
    $url.="&Operation=ItemLookup&IdType=ASIN&ItemId=$asin";
    $url.='&ResponseGroup=Medium,OfferFull';
    # Place the results into an XML string
    $xml= file_get_contents($url);
    # Use Simple XML to put results into Simple XML object
    $simple_xml=simplexml_load_string($xml);
    $author=$simple_xml->Items->Item->ItemAttributes->Author;
    $ISBN=$simple_xml->Items->Item->ItemAttributes->ISBN;
    $publisher=$simple_xml->Items->Item->ItemAttributes->Publisher;
    $publication_date=$simple_xml->Items->Item->ItemAttributes->PublicationDate;
    $title=$simple_xml->Items->Item->ItemAttributes->Title;
    $num_pages=$simple_xml->Items->Item->ItemAttributes->NumberOfPages;
    $list_price=$simple_xml->Items->Item->ItemAttributes->ListPrice->FormattedPrice;
    $image=$simple_xml->Items->Item->MediumImage->URL;
    $sale_price=$simple_xml->Items->Item->OfferSummary->LowestNewPrice->FormattedPrice;
    $min_rank=get_rank($asin,'min');
    $max_rank=get_rank($asin,'max');
    # Format the output, you'd probably want a CSS sheet of some sort
    print "<tr><td rowspan=6><IMG SRC=$image></td><td>Author: $author</td></tr>n
           <tr><td>Title: $title</td></tr>n
           <tr><td>Publisher: $publisher</td><tr>n
           <tr><td>ISBN: $ISBN</td></tr>n
           <tr><td>List Price: $list_price</td></tr>n
           <tr><td>Sale Price: $sale_price</td></tr>n
           <tr><td>Page Count: $num_pages</td></tr>n
           <tr><td>Best Rank: $min_rank</td></tr>n
           <tr><td>Worst Rank: $max_rank</td></tr>n
           <tr><td>Publication Date: $publication_date</td></tr>n";
   # Database chart points
  $Graph =& Image_Graph::factory('graph', array(600, 400));
  $Font =& $Graph->addNew('ttf_font', 'Verdana');
  $Font->setSize(10);
  $Graph->setFont($Font);
  $Plotarea =& $Graph->addNew('plotarea');
  $Dataset =& Image_Graph::factory('dataset');
  # SQL to get data points
  $sql="select active_date, sales_rank from aws where asin='$asin' order by active_date";
  $resource_set = sqlite_query($dblink, $sql);
  $i=0;
  while ($row = sqlite_fetch_array($resource_set, SQLITE_ASSOC))
  {
      $Dataset->addPoint($i, $row['sales_rank']);
      $i++;
  }
  $AxisX =& $Plotarea->getAxis(IMAGE_GRAPH_AXIS_X);
  $AxisX->setTitle('Time');
  $AxisY =& $Plotarea->getAxis(IMAGE_GRAPH_AXIS_Y);
  $AxisY->setTitle('Sales Rank', 'vertical');
  $Plot =& $Plotarea->addNew('smooth_line', &$Dataset);
  $Graph->done(array('filename' => './output.png'));
  print "<IMG SRC='./output.png'>n";
  }
   print "</table>n";

function get_rank($asin, $type)
{
   global $dblink;
   $sql = "select $type(sales_rank) as rank from aws where asin='$asin'";
   $resource_set = sqlite_query($dblink, $sql);
   while ($row = sqlite_fetch_array($resource_set, SQLITE_ASSOC))
   {
        return $row['rank'] ;
   }
}
?>

This PHP script retrieves information from the database and builds a graph based on the data collected. Here is a screen shot of the page in action.

This bit of code should look similar to the first listing, in that it makes database calls to the SQLite database and interacts with Amazon via AWS. After getting the previous rankings out of the database, and displaying the information gathered via AWS, which is stored in a SimpleXML object, the script makes a call to Image::Graph, which uses the data points retrieved from the database and makes a line graph with the rank as the Y-axis and date/time as the X-axis. The script outputs the graph in .png format and displays it via HTML.

The get_rank function returns the highest or lowest rank the item has had since the database has been active. The rank is displayed when the price, picture, author, and other information is displayed.

All this work still leaves much to do to create a fully usable application, but this is a good start. Remember that by using the AWS API you can get information about other types of products from Amazon, including information from Wish Lists, Wedding Registries, and ListMania data.

Category:

  • PHP

Corel Linux

Author: Benjamin D. Thomas

Experience Linux® performance built specifically for the
desktop with Corel® LINUX® OS. Based on Debian, this powerful system delivers an incredibly easy-to-use, four-step graphical installer that
automatically detects most PCI hardware. Featuring a KDE-based, drag-and-drop desktop environment and an innovative browser-style file manager, Corel
LINUX OS is an exciting development.

Highlights & Features


A Powerful, Easy-to-Use Operating System Designed Specifically for Your Desktop

Corel’s enhancements to Debian GNU/Linux® and KDE deliver a graphical desktop
environment that lets you get up and running fast. Featuring a simple four-step
installation program, a full-featured file manager, centralized configuration and system
updates, and an e-mail client and Web browser, Corel® LINUX® OS combines powerful
performance with intelligent simplicity.

Easy Installation with Corel® Install Express

  • Lets you install Corel LINUX OS in four easy steps
  • Automatically detects most PCI hardware
  • Provides integrated partitioning to allow a
    dual-boot system
  • Includes custom install option
  • Offers a comprehensive Help system

Easy File Management with Corel® File Manager

  • Features a friendly, graphical drag-and-drop
    design
  • Lets you browse local and Windows® network drives
  • Allows Web browsing and FTP Internet file
    transfers

Graphical Control Center / Easy-to-Use Graphical Desktop

  • Enjoy a friendly, uncomplicated work environment
    with enhanced KDE desktop
  • Set your IP address, gateway, DNS server and
    domain server quickly and easily in the graphical Control Center
  • Easily install printers with an enhanced graphical
    interface

Easy System Updates

  • Easily update your system over the Web*
  • Install new applications with just a few clicks

Outstanding Compatibility

  • Easily share Windows files
  • Compatible with other Linux software
  • Seamless integration with Windows and UNIX®/Linux
    networking environments

*Internet connection required.

What’s Included

The following is a breakdown of the components included in Corel® LINUX® OS Download,
Standard, and Deluxe:

Components (Corel LINUX OS Download / Standard / Deluxe):

  • Corel LINUX OS based on Debian 2.2 Kernel
  • Enhanced KDE Desktop
  • Corel® Install Express
  • Corel® File Manager
  • User Guide
  • Installation Technical Support: included (30 days); e-mail; e-mail and telephone
  • Netscape® Communicator
  • Adobe® Acrobat® Reader
  • Instant Messenger (ICQ-compatible client)
  • Bitstream® and Type 1 fonts: 20 / 200
  • Corel® WordPerfect® 8 for Linux®: Light version / Full version
  • Corel WordPerfect 8 for Linux User Manual
  • eFax Plus service: three months free*
  • Enhanced sound drivers (OSS)
  • CIVILIZATION®: Call to Power strategy game (Limited Edition**)
  • BRU Backup Software
  • 3.5″ Linux Penguin Mascot

Also Included…

  • Apache, mail, news, IRC
  • C/C++ and other programming languages
  • TCP/IP, NFS, UUCP, PPP, SMB
  • Perl, Tcl/Tk, awk, sed and other UNIX® tools

*Internet connection required.
**Full version. Network play and Editor/Cheater disabled.

System Requirements

  • Pentium® or Pentium-compatible processor
  • 24 MB RAM (64 MB RAM recommended)
  • 500 MB of hard disk space
  • CD-ROM drive, 2 MB VGA PCI card and mouse
  • Supports most hardware designed for Pentium
    computers. Click here
    to find out more about hardware compatibility for Corel LINUX OS.

For more information on Corel® LINUX®, visit: http://linux.corel.com


    Take console productivity to a new level with Screen

    Author: JT Smith

    Screen is an application that’s often underestimated. Screen is, simply put, a screen manager with VT100/ANSI terminal emulation. Think of it as a full screen, text-based window manager for your terminal or console. For what it is, it’s an incredibly feature-rich application. In this article, I will explain what it does and why it’s so useful.
    According to the man page, Screen “multiplexes a physical terminal between several processes (typically interactive shells).” What this means to the rest of us is that you start up Screen and run something in it (typically with a shell), and do whatever you normally would in a terminal. This is all fine and dandy but in and of itself isn’t anything new.

    Perhaps the most popular feature of Screen is something called detaching. Let’s start with an example. It’s 4 p.m. and you need to start compiling an application that takes around three hours to compile. After it compiles, you need to reboot the system. All this needs to be done by tomorrow morning. Typically that would mean spending the evening at the office, but not with Screen. With Screen you can start it up, start compiling the application, and then detach from the screen session altogether. Detaching will take you back to the command line from which you originally ran screen. At that point you can actually log out of the system and go home for dinner.

    Before you panic, let me remind you that you did not suspend the compile job or forfeit your rights to the processor. It’s still compiling. You just don’t have to keep that specific terminal open in order to keep tabs on it. After you’ve enjoyed a nice dinner at home, you open up your laptop and SSH (Secure Shell) into the system, re-attach your Screen session and see what happened with the compilation. You’ll be at the same command prompt where you started the compilation, complete with a buffer of what happened. You then simply reboot the system remotely and you’re ready for the meeting in the morning — and it didn’t take all evening.

    To start Screen, type: screen. That starts Screen and runs an instance of your interactive shell (usually Bash). You can do whatever you want in there, just as if it were another one of your xterms. By default, each command to Screen begins with C-a (Control-a) and is followed by at least one other keystroke. For example, when you're ready to detach the session you hit C-a, followed by d. When you decide to re-attach the session, you type something like screen -r with optional arguments including the session owner, pid, tty, and/or host.

    Now that you’re convinced that Screen is useful, let’s talk about being productive with it. The Screen feature that has been the most useful to me is the windowing capability. Screen allows you to have multiple windows in one session. You can have one window that’s compiling something on one machine, another that’s SSHed into another server editing a configuration file, and yet another monitoring a log server. Already do this with just a group of xterms on your screen? Let’s go through an example where Screen really shines.

    My name is Joe LinuxAdmin and I administer a cluster of six Linux servers doing various tasks, ranging from Web serving to file and printer sharing. What I need is a portable system administration environment. I’ve decided to use Screen. I start my session by typing screen. I now have something that I can remotely detach and re-attach, so that’s good. I’d like to have an open shell into each of the servers I administer, so I’ll create a window for each one and log in to each server. (To create a new window, I type C-a followed by n.) I can do this as many times as I want, so it’s easy to create the six windows I’ll need to access all of my servers at once.

    After I’ve created my windows, I need to navigate them. To go to window number four, I’ll type C-a 4. Instantly I’m viewing window number four. If I want to switch to the next window, I type C-a n. Now that I’ve got a server with the hostname of zeus on window 5, how am I going to remember that zeus is on 5? I can always hit C-a A and type in zeus. After I name my windows, I don’t need to cycle through them. I can just as easily type C-a ', then type in the name of the machine I want — and there it is. If I’m especially conservative with my keystrokes, then C-a " will show me a list of my windows along with their titles.

    I now have, in one terminal, windows open to all of the servers I administer. Everything is right at my fingertips. I’ve got it all set up. It’s now 5 p.m. and time to go home. Around 8 p.m. I’m out for a walk with my date. Lo and behold, my pager goes off. As usual I look for the nearest computer to address the situation and end up in a cybercafe on a Windows machine. Having a free SSH client on the Windows machine is easy thanks to PuTTY, but it’s going to take me forever to load up all of those terminals and log in to all of the machines. It’s too bad I left my Screen session attached at the office.

    But wait! This is where Screen excels. I log in to my machine and type screen -dr. It remotely detaches my session and re-attaches it where I am. I now have my Screen session with all of my windows, all of my servers, and my entire administration environment right in the cafe. None of it is dependent on the client I’m using. I could be on a Linux machine, a Windows machine, or a Mac OS X machine.

    Having windows for different servers you’re logged into is just one example of the uses for multiple windows within Screen. It works equally well for a development environment. You can have one window for a text editor, another for compiling, another for debugging, and yet another running your application. It’s a mobile environment that can be accessed from anywhere.

    Screen has many powerful features that go beyond attaching and detaching. For example, you can have more than one person at a time attached to a screen session. Real-time editing by two people on one file — how’s that for collaboration? You can also password-protect Screen sessions to add a level of security. In addition, you can have more than one Screen session per user. The possibilities are endless.

    Next time you’re looking for a level of flexibility at the command line that you feel just isn’t possible, I recommend reading up on Screen. It’s a powerful utility that will enhance your productivity — and at the same time it will make your life easier.

    OpenSSH with Public Key Cryptography Tutorial

    Author: a_thing

    libervisco writes “OpenSSH, an OpenBSD project, is an incredibly secure implementation of the SSH protocol, a way of logging into a remote machine. For users of outdated protocols such as RSH, rlogin, and Telnet, it’s an updated, secure replacement. For those who have never used anything like it, SSH can become a very valuable tool.”

    Selenium project using Ruby on Rails and Ajax

    Anonymous Reader writes “Selenium is a useful and important addition to the toolbox of software engineers, designers, and testers. Together with a continuous integration tool, it allows teams to automate acceptance tests and build better software as they find bugs easier, earlier, and more often. This article provides an example of how to apply Selenium in a real-world project using Ruby on Rails and Ajax.”

    Link: ibm.com/developerworks

    Category:

    • News

    librivox – distributed public domain audio lit

    Hugh McGuire writes “I thought you might be interested in the LibriVox project, a distributed open source audio literature project, started in August of this year, and moving along pretty well:

    http://librivox.org/
    LibriVox volunteers record chapters of books in the public domain, and we release the audio files (catalog and podcast) back into the public domain. Our objective is to make all books in the public domain available, for free, in audio format on the internet. We are a totally volunteer, open source, free content, public domain project.

    Brewster Kahle invited us to attend the http://www.openlibrary.org/ launch, where we produced a recorded version of one of the openlibrary books … you can hear what happened when Brewster demonstrated here:
    http://librivox.org/index.php?p=58

    We’ve got 100+ volunteers at the moment, 10 books completed, and expect 25 books by the end of 2005, with a target of at least 100 books by the end of 2006. We’re always looking for new volunteers, both to read and to help with the various technical projects to keep this growing project running smoothly.

    thanks,

    Hugh McGuire.
    http://librivox.org/

    Link: librivox.org

    Lawyers in love with open source

    Author: Marco Fioretti

    Most of the time, open source supporters think of lawyers as a crowd of hungry vultures, throwing patents and cease-and-desist letters at innocent hackers. However, in the province of Foggia, Southern Italy, two small groups of lawyers have turned themselves into open source evangelists.

    What? Did hell just freeze over? No, it’s just common sense. The long-term availability and privacy of all legal documents deserve the highest possible guarantees. Only non-proprietary file formats like OpenDocument, the default format in OpenOffice.org 2.0, will always remain legally accessible with any software program. Proprietary software, if loaded with DRM functionality, may silently track file modifications and exchanges and automatically report them to third parties. So much for attorney/client privilege.

    For several months, two Gruppo di Lavoro – Open Source (GL-OS) — that is, Open Source Workgroups — one right in Foggia and another in the nearby town of
    Lucera, have been promoting the free sharing of IT knowledge among lawyers of the province, the philosophy of free and open source software (FOSS), and the diffusion of GNU/Linux systems. Practically speaking, they organize meetings and classes and distribute free software and related documentation. Their base is the Lawyers’ Hall in the Palazzo di Giustizia (Tribunal) of Foggia, where a couple of computers were set up to showcase the potential of FOSS. Free support, especially for OpenOffice.org, is always available, and newbies can ask for a personal tutor. Last May the groups held a well-attended workshop on these issues. Satisfied participants received a CD-ROM with FOSS programs for a Windows desktop.

    The GL-OS lawyers told me that many of their Microsoft-only colleagues would like the greater security and transparency guaranteed by open source. But as in other parts of the world it turns out to be hard to get lawyers to use FOSS. A lot of legal software and forms can be used only in a Microsoft environment. A project member said, “The discovery of open source makes you realize that the first great obstacle is the information technology subculture that is inoculated into people.” In other words, non-technical obstacles remain the hardest to overcome.

    GL-OS volunteers understand that lawyers, like almost everybody else in the world, see computers just as fancy typewriters: tools that have to solve job-related problems, not create new ones. In a law office, only 2-3% of what Linux can do is actually needed — or, again quoting the GL-OS lawyers, “You don’t need a Ferrari because you must move at only 5 km/h anyway.” Consequently, GL-OS makes a point of working as gradually and painlessly as possible. The computers in the Lawyers’ Hall can boot either Windows XP or Mandrake 10.1. The Windows partition hosts only FOSS programs that a lawyer really needs and can use without problems: OpenOffice.org, Firefox, and Thunderbird. The first IT help many GL-OS visitors need is learning how folders and file managers can help to keep files organized, and that documents can be protected with cryptography. Only later does GL-OS introduce the FOSS philosophy.

    Future plans

    GL-OS is preparing to release Italian legal forms in OpenDocument format, and plans to offer custom CD-ROMs. The group is studying how to become official OOo Community Distributors. The to-do list also includes training classes (especially for OpenOffice.org) and conferences about FOSS and the forensic world. The longest-term goal is the utilization of FOSS to access both legal databases and the Processo Civile Telematico, the Italian project that is attempting to reduce as much as possible the amount of paper circulating in any given trial. Once it is implemented, all requests to the court and other documents will be written and directly filed in encrypted XML format, signed with smart cards.

    When I asked what support GL-OS needs most urgently from the FOSS community, the answer came fast: please give us more simple manuals to install and configure applications! GL-OS would also like to hear from other lawyers and FOSS programmers to cooperate and exchange experiences.

    Similar projects in Italy

    GL-OS wasn’t my first encounter with pro-open source Italian lawyers. During the Linux World Expo 2004 in Milan, Stefano Sutti, managing partner of the law firm Studio Legale Sutti, explained how his company has been using open source software for years.

    In addition, the Linux-Lex portal provides lots of information for lawyers interested in migrating to Linux. The Studio Legale Sutti commissioned and subsequently released under the GPL license its Web-based law office management application, called Knomos. Another project in the same space is eLawOffice, which has recently set up an online community to get feedback from end users. The basic functions of eLawOffice are usable, and most of the code could be reused by lawyers of other nations. The project’s Links page points to more resources than I could list here.

    Given the cross-border nature of the legal hassles surrounding FOSS today, building a network of lawyers interested in promoting it would be great, wouldn’t it? Within the E.U., hackers and lawyers could cooperate to make sure FOSS plays the greatest possible role in a digitally interoperable Europe, and these (for now) isolated groups worldwide could explore how to work together through the Software Freedom Law Center.

    Category:

    • Legal

    Eclipse method for pairwise testing

    Anonymous Reader writes “This Eclipse based technology is for generation and manipulation of test input data or configurations. It uses sophisticated combinatorial algorithms to construct test suites with given coverage properties over large parameter spaces. The use of combinatorial covering configurations, also known as pairwise testing, is a well-known technique for covering large input spaces. Here is a tutorial that explains how to use the Eclipse environment.”

    Link: alphaworks.ibm.com

    Category:

    • Java