By Kyle Hailey

Webcast June 19th: Jonathan Lewis – Expert Look at Delphix


Live webcast: Jonathan Lewis – An Oracle Expert’s Look at the Delphix Technology
Date: Wednesday, June 19 @ 9am PT
Click Here to Register

Jonathan Lewis joins us for the second of three webcasts in his series on the Delphix technology and his experiences with it. Jonathan came out to the Delphix offices in California in March, kicked the tires on the product for a few days, and had a chance to talk with some of the creators of Delphix, ZFS, DTrace and Active Data Guard.

The first webcast was an informal discussion of his experiences. This second webcast will also be informal but more technical.

Please send us questions by commenting on this blog post, and we will try to incorporate them into the webcast.

Some of the areas we may talk about are:

Introduction

  1. Motivation for virtualizing databases, covering some of the use cases that database virtualization solves.

Ease of Use of Delphix

  1. link to a source database

  2. provision a virtual database clone

  3. administration: automation of change collection and purging of old data.

Delphix on a laptop

  1. Jonathan and I have set up Delphix on our laptops, and we will discuss a bit about how this has worked for us.

  2. Demo of Delphix (either on a laptop or on a “real” machine)

Technical explanations

ZFS

  1. ZFS “Keeps a copy of every new version of a block”

  2. ZFS equivalent of the read-consistent index that gives you the file as at any point in time

  3. Awareness of Oracle block size – setting the logical ZFS block size to the Oracle block size (see the sketch after this list)

  4. Being able to compress the ZFS logical block to a smaller number of sectors

  5. Special case of empty blocks compressing to fit the ZFS meta-data entry
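
To make the block-size and compression items above concrete, here is a minimal sketch using stock ZFS commands. The pool, dataset and device names are made up, and Delphix's ZFS-derived filesystem is not configured by hand like this; the sketch only illustrates the recordsize and compression ideas.

    # Hypothetical pool/dataset/device names; plain ZFS shown for illustration only
    zpool create dpool /dev/sdb
    zfs create dpool/oradata
    # Match the logical ZFS block (recordsize) to the Oracle db_block_size (8k here)
    zfs set recordsize=8k dpool/oradata
    # Per-record compression lets an 8k Oracle block shrink to fewer disk sectors;
    # a nearly empty block may compress far enough to fit alongside its metadata entry
    zfs set compression=lzjb dpool/oradata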

Source database linking

  1. RMAN full backup run by Delphix (see the sketch after this list)

  2. Delphix uses RMAN tape library APIs

  3. RMAN backup from SCN

  4. Allows consistent versions of the database for EVERY incremental backup taken, forever

  5. Enable CTWR (block change tracking) to minimise backup times

  6. Ability to copy all redo logs and keep up with the online redo logs, so it is possible to start from any level 1 backup and roll forward to any point before the next level 1

  7. How old versions of data blocks from the backup history can be dropped when no snapshots depend on them, so that the backup keeps rolling forward in time without needing a whole new starting snapshot to be taken

  8. The complexity of making this task efficient

  9. The importance and benefit of being able to do it.
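
As a rough illustration of the RMAN side of source linking, the sketch below shows hand-typed SQL*Plus and RMAN commands for block change tracking and incremental backups. Delphix actually drives RMAN through the tape library (media management) API rather than through commands like these, and the file name and SCN value are invented.

    # Hand-run illustration only; Delphix calls RMAN through its tape library API, not like this
    # Track changed blocks so that level 1 incrementals read only what has changed
    echo "ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/ORCL/bct.f';" | sqlplus -s / as sysdba
    # Seed with one full (level 0) backup, then take repeated level 1 incrementals
    echo "BACKUP INCREMENTAL LEVEL 0 DATABASE;" | rman target /
    echo "BACKUP INCREMENTAL LEVEL 1 DATABASE;" | rman target /
    # An incremental can also be taken from a specific SCN (the SCN here is made up)
    echo "BACKUP INCREMENTAL FROM SCN 1234567 DATABASE;" | rman target /

Each level 1 incremental is a consistent point to provision from, and the archived and online redo copied in between allows a roll forward to any point before the next level 1.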

Virtual Database

  1. Creating a vDB: how new data is written without a history trail, while unchanged data blocks come from a snapshot and are mostly the original backups (as sketched below)
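
For a feel of how a vDB shares unchanged blocks with the linked source copy, here is a minimal sketch using stock ZFS snapshot and clone commands. The names are hypothetical, and this is only an analogy for the mechanism Delphix implements internally.

    # Hypothetical names; stock ZFS used as an analogy for the Delphix mechanism
    # Freeze a point-in-time image of the linked copy; it shares every block with the source
    zfs snapshot dpool/oradata@scn_1234567
    # A writable clone initially points at the same blocks; only blocks the vDB changes get new copies
    zfs clone dpool/oradata@scn_1234567 dpool/vdb1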

Related blog posts on Jonathan’s blog



