S3 Bucket Archives

[obsolete]

Archiving Logs in S3

Loggly stores your logs in a large-scale search engine hosted on the Internet. The amount of time we keep your logs in our search index is called the 'index retention time', which you can set from the Pricing tab of your account. Once events reach an age older than your account's maximum index retention time, they are removed from the index.

Loggly provides a way to archive logs older than your account's index retention time by writing them to your own S3 bucket. We'll create folders named after your Loggly input IDs. Logs in your bucket are kept until you remove them, so you'll always have a copy handy if you need one.

Configuring Log Archiving on S3

Create an Amazon S3 bucket, authorize us to write to it, and give us the bucket name; from then on, we'll write your logs into that bucket.

To set up a bucket for writing, head on over to the Amazon S3 dashboard. If necessary, create a new bucket with the Create Bucket button. Ensure your bucket name is valid and follows the S3 naming rules, for example "new-s3-loggly-bucket". There is a logging option in the Create Bucket dialog, but you can ignore it; it has nothing to do with Loggly.
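If you'd like to sanity-check a bucket name before creating it, a rough validator might look like the sketch below. It covers only the core S3 naming rules (length, allowed characters, and not looking like an IP address), not every edge case:

```python
import re

def is_valid_bucket_name(name):
    """Rough check against the core S3 bucket naming rules:
    3-63 characters; lowercase letters, digits, hyphens, and dots;
    starts and ends with a letter or digit; not shaped like an IP."""
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):  # looks like an IP address
        return False
    return True
```

For instance, `is_valid_bucket_name("new-s3-loggly-bucket")` passes, while names with uppercase letters or underscores do not.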

Once you have the bucket created:

  • Select the bucket and click the 'Properties' tab. A panel at the bottom lists the bucket's permissions.
  • Click the 'Add More Permissions' button.
  • For the grantee, enter 'aws@loggly.com'.
  • Check the boxes for 'List', 'Upload/Delete', and 'View Permissions'.
  • Click 'Save' in the lower-right corner.
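If you'd rather grant these permissions from code instead of the console, the steps above correspond to three ACL grants to the aws@loggly.com grantee. As a sketch, this builds the grant structure in the shape S3's put-bucket-acl API expects (e.g. via boto3); the permission mapping assumes the usual console-to-ACL correspondence:

```python
# Grantee identified by email, as entered in the console steps above.
LOGGLY_GRANTEE = {"Type": "AmazonCustomerByEmail", "EmailAddress": "aws@loggly.com"}

# Console checkbox -> ACL permission:
#   'List' -> READ, 'Upload/Delete' -> WRITE, 'View Permissions' -> READ_ACP
loggly_grants = [
    {"Grantee": LOGGLY_GRANTEE, "Permission": perm}
    for perm in ("READ", "WRITE", "READ_ACP")
]
```

You would pass these grants (together with the bucket's owner) in an AccessControlPolicy when calling the S3 API; the console does the equivalent for you.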

Back over on Loggly, go to your account page (yoursubdomain.loggly.com/account/archiving/) and enter the name of your bucket in the form. Click submit; we'll verify that we can write to your bucket and will start flinging logs into it as we get them.

NOTE: Only account owners have permission to set up archiving.

Events are written to your S3 bucket in a .part format. The .part files are temporary files that Loggly needs in order to properly merge your data, so please don't edit or open them. Approximately every two hours, the .part files are processed into the format specified on your archiving page (more on this just below), much as a multi-part .rar archive is combined once all of its parts are present.
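Because .part files are temporary, anything that processes your archive bucket should skip them and only touch finished files. A minimal sketch (the key names are hypothetical):

```python
def finished_archives(keys):
    """Filter an S3 key listing down to finished archive files,
    skipping the temporary .part files Loggly is still merging."""
    return [k for k in keys if not k.endswith(".part")]

# Hypothetical listing: one finished archive, one still being merged.
keys = ["logs/2013-05-01.json", "logs/2013-05-01.json.0001.part"]
ready = finished_archives(keys)  # -> ["logs/2013-05-01.json"]
```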

There are three formats in which your logs can be stored:

  • Raw
    • Stores your logs exactly as we received them.
  • JSON
    • Your logs are stored as JSON with the IP address, timestamp, and input name. The log events are stored as JSON-escaped text in the event field.
  • CSV
    • Your logs are stored as CSV with the IP address, timestamp, and input name. The log events are stored as CSV-escaped text in the event field.
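To give a feel for the JSON and CSV formats, here's a sketch that parses one line of each using only the Python standard library. The field names and column order are assumptions for illustration; check a real archive file for the exact layout:

```python
import csv
import io
import json

# Hypothetical JSON-format line: metadata fields plus the original
# event as JSON-escaped text in the event field.
json_line = '{"ip": "10.0.0.1", "timestamp": "2013-05-01T12:00:00Z", "input": "syslog", "event": "User \\"bob\\" logged in"}'
record = json.loads(json_line)  # record["event"] == 'User "bob" logged in'

# Hypothetical CSV-format line: same fields, with the event CSV-escaped
# (embedded quotes doubled, field wrapped in quotes).
csv_line = '10.0.0.1,2013-05-01T12:00:00Z,syslog,"User ""bob"" logged in"'
ip, timestamp, input_name, event = next(csv.reader(io.StringIO(csv_line)))
```

Using the `json` and `csv` modules (rather than splitting strings by hand) is what makes the escaped event text round-trip back to the original log line.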

Note: It may take upwards of an hour before you start seeing logs in your bucket.

Using an S3 Client

There are several clients available for browsing your S3 buckets. If you're on OS X, check out S3Hub. At $2.99 on the App Store, it's a heck of a deal.

Another great tool that we use at Loggly is the s3cmd CLI. If you're using Ubuntu, it's normally in the default repositories (sudo apt-get install s3cmd). s3cmd can also be found on GitHub!
