Wednesday, December 4, 2013

ActiveState make ActivePerl non-free

I have just received an email from ActiveState which contains the following:

To be better equipped to answer support questions, particularly for customers using ActivePerl in production, we have changed the licensing terms of Community Edition: http://www.activestate.com/activeperl/license-agreement. To use ActivePerl in production, you must now have a supported Business or Enterprise Edition license; we continue to provide the latest versions free for development and testing use.

So now we need a decent free-as-in-beer and free-as-in-free Perl distribution for Windows. And don't tell me Strawberry Perl - last time I looked it didn't work well, and anyway most people don't want to have to install a whole development environment just to get Perl.

I guess ActiveState have the right to do this, strictly speaking, but it's a dick move nevertheless. To say I'm annoyed would be understating it considerably. It's a classic bait and switch. Build up a community's dependence on what you give them, and then start charging them for it.

For the time being I will be advising clients on Windows to steer clear of PLPerl for Postgres. I'm not sure what I'll recommend they use instead. It certainly makes it more imperative to make sure that we can use PLV8 on Windows.

Thursday, November 14, 2013

A big news day

Of course the biggest news of the day is that Amazon has announced immediate support for Postgres on their RDS service.

I have taken this for a very short spin, and it looks like it works as I expected.

I noticed a couple of things. First, it doesn't require SSL to connect. That's probably not a good thing. They should have at least an option to require use of SSL. And second, they don't seem to have support for the awesome pg_stat_statements module, which is such an important tool in the modern Postgres DBA's arsenal. I hope I'm wrong or that they fix that one quickly.
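
For reference, this is the sort of thing I'd want to be able to run on any Postgres instance I look after - a minimal sketch (it also requires shared_preload_libraries = 'pg_stat_statements' and a restart, which is exactly the kind of thing a hosted service would have to enable on your behalf):
-- install the extension, then find the most expensive normalized queries
CREATE EXTENSION pg_stat_statements;

SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;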

The other interesting news I heard for the first time today from my colleague Jeff Frost, is that Ubuntu have released a new kernel version for the 12.04 LTS release that doesn't have the performance horrors of earlier kernels. That should be good news for lots of Ubuntu users. I have one client who downgraded to a non-LTS release on the basis of the performance regression, so this is a big deal.

Tuesday, October 22, 2013

Looking forward to Triangle PUG kickoff

The Triangle PUG has grown to 31 members since starting less than a month ago, and 20 have said they will attend the kickoff meeting tomorrow (Wednesday) evening. I'm quite excited!

Monday, October 14, 2013

Triangle PUG kickoff meeting, 7.00 pm Oct 23rd

After a little time spent trying to get a good venue, I've scheduled the kickoff meeting for the Triangle PUG for Oct 23rd at 7.00 pm. Robert Treat will be in town and attending, and I'm going to reprise my PostgresOpen talk on PostgreSQL 9.3 and JSON. Details are at meetup.com.

Tuesday, October 1, 2013

NC Triangle PUG moving along

We're up to about 19 members with very little effort, so I think this will be viable. We have a group on meetup, and I hope to schedule the first meeting, probably a week or two from now, very soon.

Extra functions needed

I'm trying to collect a list of things that people need for handling JSON, particularly in generating JSON.

For example, Josh gave me this little problem yesterday:

given:
 serial | name      | type
 847001 | t15       | GE1043
 847002 | t16       | GE1043
 847003 | sub-alpha | GESS90

I want to produce: 
  { "turbines" : 
    { 
       847001 : { "name" : "t15", "type" : "GE1043" }, 
       847002 : { "name" : "t16", "type" : "GE1043" }, 
       847003 : { "name" : "sub-alpha", "type" : "GESS90" } 
    } 
  }

I think this is going to call for a small addition to the json_build package, a json object aggregator. With that we'd be able to do this:

select json_build_object('turbines',
        json_object_agg(serial, json_build_object('name', name, 'type', type)))
from mytable;

Probably we'd want a similar aggregator for arrays. This would be different from the builtin json_agg function which takes a record as its argument and always produces an array of hashes.
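
For contrast, here's roughly what the existing aggregate gives you for the data above, assuming the table is called mytable as in the query above:
-- the builtin json_agg produces an array of objects, one per input row,
-- rather than a single object keyed by serial
SELECT json_agg(t) AS turbines
FROM (SELECT serial, name, type FROM mytable) t;
-- [{"serial":847001,"name":"t15","type":"GE1043"},
--  {"serial":847002,"name":"t16","type":"GE1043"},
--  {"serial":847003,"name":"sub-alpha","type":"GESS90"}]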

Saturday, September 28, 2013

Features still wanted

I was uploading my recent slide decks to SlideShare and noticed I had the deck there from a talk I gave about 4 years ago at TriLug about new features in Postgres 8.4. In there is a slide (#24) labeled "Major missing SQL features" and containing these two items:
  • grouping sets
  • merge
Sadly, four years later those features are still missing. And they are still the things I think are the most urgent new SQL features for us to provide. Both are hard to do well, and probably both require major funding kicks to have any chance of being done.
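
For anyone who hasn't run into them, this is roughly the standard SQL in question - illustrative only, with made-up table names, and neither statement is accepted by Postgres as of 9.3:
-- GROUPING SETS: compute several groupings in a single pass
SELECT region, product, sum(amount)
FROM sales
GROUP BY GROUPING SETS ((region, product), (region), ());

-- MERGE: insert-or-update driven by a join
MERGE INTO target t
USING source s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET value = s.value
WHEN NOT MATCHED THEN INSERT (id, value) VALUES (s.id, s.value);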

Friday, September 20, 2013

Interesting new technologies

A couple of things I heard about at Postgres Open that deserve closer inspection are Docker, a nice container toolkit leveraging, among other things, Linux LXC, and OpenShift, another containerization platform. Today I discovered that while I was travelling home yesterday (and taking far too long about it), a major announcement was made of a collaboration between Docker and RedHat, the principal sponsors of OpenShift. This is pretty cool stuff. OpenShift already has a "cartridge" for PostgreSQL, and I am going to play with it the first opportunity I get.

NC Triangle PUG here we go

Quite by chance, two other people from the NC Triangle area (Raleigh, Durham, Chapel Hill and surrounds) and I went to the 2ndQuadrant sponsored dinner (thanks guys) at Postgres Open on Monday night. Out of that came a proposal for a Triangle Postgres Users Group. I set a few things in motion on that today, and I hope it will be a reality very soon. Please email me if you are interested and live in the area, so I can make sure we don't miss you when everything gets set up.

Slides from Postgres Open - 9.3 JSON and Blackhole FDW

Here are the slides from my talk at Postgres Open about JSON in Postgres 9.3

I also reprised my pgCon lightning talk about the Blackhole Foreign Data Wrapper.

Wednesday, September 4, 2013

Yet another reason to hate subversion ...

Yesterday, a family member who is new to subversion fat-fingered a commit and instead of committing just one file committed a whole large directory. It's an easy enough thing to do. My initial reaction was "OK, it shouldn't be too hard to roll that back." After all, I use git reset on a fairly regular basis.

Turns out this is actually quite hard to do. Here is one description of how to do it. Yuck.

Thursday, July 25, 2013

Better info in Building with Mingw/Msys on Windows

I've improved the information on setting up a PostgreSQL development environment on Windows. It now covers building from git as well as from tarballs, and a bunch of other changes.

What does a DBA do?

Someone was asking on IRC about what a DBA does, and I found this link. It's fairly Oracle-centric, understandably since it's from an Oracle FAQ, but still quite a good description of all the things a DBA has to do.

Friday, July 19, 2013

More info on the Buildfarm dashboard page

I'm fairly conservative about adding things to the Buildfarm dashboard page. Screens are probably mostly wider than they were back when the buildfarm was written, but I still try to make sure there won't be any line wrap.

But today I did add something - each line now shows the git HEAD ref of the build in question. That makes it easier to see which builds contain a certain commit. This was at the request of Stephen Frost. I think we can afford the extra line width.

The changes to the web app were tiny. The database required a bit more - the dashboard is driven by a bespoke manually refreshed materialized view, and that table and the view used to refresh it had to have the extra column added.

I hope this proves useful.

Monday, July 1, 2013

SAS supports PostgreSQL

From a recent email:
Another good thing for SAS, is they FINALLY support postgres. Next month the next major version of SAS (9.4) is being released which has postgres support. Their midtier infrastructure service actually uses postgres for backend db stuff, which was very surprising to me that SAS is using it to support its own stuff now.

You can actually demo VA here:
http://www.sas.com/software/visual-analytics/overview.html

I have very fond memories of SAS - SQL and SAS were the first two computer languages I learned, back in the 1980s, and my knowledge of SAS largely helped me pay my way through university when I returned to study in 1988. Sadly, I've hardly touched it in the last 20 years, even though they are headquartered here, a couple of miles from my house. They are consistently said to be one of the best places in the world to work. So it's nice to see them supporting PostgreSQL.

Thursday, June 20, 2013

A literate client comments on Postgres

Some clients are more fun than others. Today, after I told one client of a long standing feature of Postgres that he was unaware of, he channelled his inner bard and sent me this email:

Steve:
How beauteous this software is!
O brave new database
That has such commands in't!

Andrew:
'Tis new to thee. 
 (To save you looking it up, he's adapting a well known passage in The Tempest).

Things like this make my day.

Friday, June 14, 2013

Release 4.11 of PostgreSQL Buildfarm client

Version 4.11 of the PostgreSQL Buildfarm client has been released. It can be downloaded from http://www.pgbuildfarm.org/downloads/releases/build-farm-4_11.tgz

Changes since 4.10:

  • Turn down module cleanup verbosity.
  • Add check for rogue postmasters.
  • Add pseudo-branch targets HEAD_PLUS_LATEST and HEAD_PLUS_LATEST2.
  • Use Digest::SHA instead of Digest::SHA1.
  • Make directory handling more robust in git code.
  • Move web transaction into a module procedure.
  • Switch to using the porcelain format of git status.
  • Provide parameter for core file patterns.
  • Use a command file for gdb instead of the -ex option.

The web transaction and Digest::SHA changes have allowed the removal of a couple of long-standing uglinesses on the system. In almost all cases, the config parameter "aux_path" and the separate run_web_transaction.pl script are now redundant (the exception is older Msys systems).


Enjoy

Tuesday, June 4, 2013

This is what beta is for

Today on IRC someone who has been testing out the new JSON stuff complained about a case I hadn't catered for, namely someone representing a Unicode character outside the Basic Multilingual Plane as a surrogate pair of characters constructed with the '\u' JSON escape. This is explicitly allowed in the JSON spec, so it needs to be handled. It's going to be a bit ugly to fix this, I suspect, but I'm glad to find out about it now rather than later. Thank goodness for beta testers.
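
For the curious, the sort of input in question looks like this - U+1D11E (the musical G clef), which lies outside the BMP and so has to be written as a surrogate pair:
-- a \u surrogate pair, explicitly allowed by the JSON spec
SELECT '"\ud834\udd1e"'::json;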

Saturday, May 25, 2013

Redis FDW Singleton Key tables

I recently mentioned the possibility of tables being drawn from a single Redis object, so you would get one row per key/value pair in the named hash, or one row per element in the named scalar, set, list or zset. This has now been committed, for use with Release 9.2 and later. There are examples in the regression test files. This is going to be particularly important when we get to writable tables, which is the next order of business.

For those of you who were in my talk yesterday, the breakage I mentioned has now been fixed.

Thursday, May 23, 2013

Blackhole FDW

My Blackhole FDW talk seemed to go well. The line about no unintentional data loss got the best laugh. Here are the slides.

Besides being a bit of fun, this did have a serious purpose - creating a skeleton for building an FDW, including the writable API. The code has the contents of the docs on writing an FDW as comments in the appropriate functions, to help a new FDW writer.

The code is on bitbucket.

Buildfarm download location

It was just pointed out to me that the download link on the buildfarm server front page wasn't updated when I fixed the other links after switching to publishing them on the buildfarm server itself. That's been fixed now. The only valid link for downloading the client is http://www.pgbuildfarm.org/downloads/. Sorry for any confusion.

Developer meeting went well

There seems to be a consensus, which I share, that the annual PostgreSQL Developers Meeting went much better this year than in the previous couple of years.

One item of note: the commit fest managers are going to be much more vigilant about making sure that if you have signed up for a review you will work on it right away, and about removing reviewers who are not producing reviews. So you will be able to have much more confidence that if someone is signed up as a reviewer for a patch they will actually be doing the work.

After the meeting and the obligatory visit to the Royal Oak, a number of us went out and had a pleasant Indian meal, and then I came back to the hotel, fixed a customer problem, and wrote up some slides for my proposed lightning talk. More on this later.

Now, on to the conference proper!

Thursday, May 16, 2013

Our commitment to quality is second to none.

If anyone doubts the total commitment of the Postgres project to quality and correctness, let them be reassured by this completely correct but decidedly pedantic commit. I dips me lid to Tom Lane and Thom Brown.

Wednesday, May 15, 2013

Time down the drain

Annoyingly large amount of non-productive time today. First I went and renovated the Makefiles for all my Postgres extensions, having originally copied a piece of bad logic from somewhere into almost all of them. This was exposed by a change in the HEAD branch of Postgres that's about to be reverted, but it seemed best to remove the bogus logic anyway.

Then, for no apparent reason, Xfce forgot to load xfwm4 on one of my machines, and suddenly I had lost all my window decorations - no buttons to close or minimize windows, no icons in the window button bar, etc. That took me an hour or more to diagnose and fix.

All this is delaying my slide preparation for pgcon :-(

PostgreSQL tables for single Redis objects

After playing a bit with the Redis FDW and the Redis command extension, Josh Berkus was slightly dissatisfied. He was looking for a way to map the possibly huge set of values in a single Redis object to a set of rows in a PostgreSQL table, and the Redis FDW currently maps each object to a single row - as an array if it's appropriate. Of course, we could call unnest() on the array, but it seems roundabout to construct an array only to have to unpack it immediately. These aren't terribly cheap operations.

I've been thinking a bit about his complaint, and I think this will be doable. What we'll need is a new table option that specifies the key and designates the table as one sourced from a single key rather than a range of keys. Say we call this option singleton_key. Then we might do something like:
CREATE FOREIGN TABLE hugeset (value text)
SERVER localredis
OPTIONS (tabletype 'list', singleton_key 'myhugelist');
This option would be incompatible with the tablekeyset and tablekeyprefix options. If given, the key won't be looked up at all. We would simply use the given key and return the corresponding list of values. That would make selecting from such a table faster - possibly lots faster. For scalars, sets and lists, the table would have one column. In the case of a scalar there would only be one row. For zsets, it would have two columns - one for the value and one for the score. Finally, for hashes it would have two, one for the property and one for the value.
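
For a hash, a sketch of what this might look like (option names as proposed above; the server and key names are just for illustration):
-- proposed, not yet implemented: a two-column table drawn from a single Redis hash
CREATE FOREIGN TABLE session_foo (property text, value text)
SERVER localredis
OPTIONS (tabletype 'hash', singleton_key 'web.session.a9f3c1763d');

SELECT * FROM session_foo;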

This looks like a useful possible enhancement.

Tuesday, May 14, 2013

Redis FDW gets a slightly new home

To avoid a few administrative difficulties imposed by github, Dave Page and I have moved the repo for the Redis FDW to https://github.com/pg-redis-fdw/redis_fdw. If you have a clone of the old repo, it should be sufficient to change the remote setting in the .git/config file, replacing "dpage" with "pg-redis-fdw". The commits are identical up to the point where we removed everything from the old repo.

Sorry for any inconvenience.

Sometimes you just get hit by a huge application load

I love emergency calls. They give me a chance to get into something new, very quickly, and get it fixed. A few days ago a company called SpaceInch made a new app called Say The Same Thing, which became an instant hit, possibly due to the quite awesome video by OK Go, which you should watch even if you don't look at anything else in this post.

Since the launch they have been hit by a huge flood of users, and last night things started to drag, so they asked PostgreSQL Experts for some emergency help. The app is backed by a PostgreSQL database running on Heroku. During a two-hour phone call I was able to get in, examine the database, and devise some steps that we hope will make things run more smoothly.

The solution is going to involve some partitioning of one or two tables. This isn't being done so much for the benefit of constraint exclusion as because it lets them cheaply drop large amounts of data that are no longer needed. Previously they had been cleaning one of these tables by deleting rows, but abandoned that as it adversely affected performance. But the consequence was that the table just grew and grew, to several million rows in a few days. Dropping child tables that contain data of no further interest will work well to tame the data size without all the overhead of row deletion.
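
The general shape of that, with made-up table names, is the classic inheritance-based partitioning pattern (there is no declarative partitioning in these releases, so CHECK constraints plus INHERITS do the job):
-- hypothetical sketch: partition by day so stale data can be discarded
-- with a cheap DROP TABLE instead of an expensive DELETE
CREATE TABLE game_events (
    id         bigserial,
    created_at timestamptz NOT NULL,
    payload    text
);

CREATE TABLE game_events_2013_05_13 (
    CHECK (created_at >= '2013-05-13' AND created_at < '2013-05-14')
) INHERITS (game_events);

-- when a day's data is no longer interesting:
DROP TABLE game_events_2013_05_13;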

Troubles sometimes don't come singly. I actually had to deal with two emergencies last night, which is quite unusual - the other was also for a client running on Amazon. For good or ill, we are seeing more and more use of Postgres instances running in the cloud - whether managed by the client or on a managed service such as Heroku.

Friday, May 10, 2013

Only GROUP BY what you really need to

The old rule used to be that if you have a query that contained aggregated columns, you have to GROUP BY every other column selected. These days you are allowed to omit columns that are provably functionally dependent on one or more of the other grouped by columns. In practice, that means you can omit any columns that are not in the table's primary key if all the primary key columns are grouped by.

Sometimes people, often including me, just do this fairly mindlessly, but sometimes it bites you. Consider this simple query:
SELECT a.id as a_id, a.properties, sum(b.amount) as expenses
FROM people a
   JOIN expenses b on a.id = b.person_id
GROUP BY a.id, a.properties
We don't really want the expenses grouped by the person's properties. We just put that in because the parser complains if we don't. And if people turns out to be a view which joins a couple of tables, we probably can't leave it out either. This can increase the amount of sorting that the GROUP BY requires, which can sometimes have dramatic effects on performance. But even worse, there are cases where this can actually cause the query to be unrunnable. One such case is if properties is a JSON column.

That might surprise you. It has certainly surprised a couple of people I know. The reason is that there is no equality operator for JSON.
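
It's easy to demonstrate:
SELECT '{"a":1}'::json = '{"a":1}'::json;
-- ERROR:  operator does not exist: json = json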

So, how can we write this so we only GROUP BY what we really need to? One way is to pick up the extra column later in the query, after we have done the grouping, like this:
WITH exp as 
(
  SELECT a.id as a_id, sum(b.amount) as expenses
  FROM people a
     JOIN expenses b on a.id = b.person_id
  GROUP BY a.id
)
SELECT exp.*, p.properties
FROM exp 
   JOIN people p ON p.id = exp.a_id
This might look a bit silly. We're adding in an extra join to people that we shouldn't need. But in my experience this actually often works pretty well: what you pay for the extra join is often recouped by the simpler GROUP BY, which processes smaller rows, uncluttered by the extra columns you want carried through. And, in the case of a JSON column, it has the virtue that it will work.

I often get called in to look at queries that run slowly and have huge GROUP BY clauses (I have seen them with 50 or so columns). I almost always start by reducing the GROUP BY to the smallest set possible, and this almost always results in a performance gain.

Redis commands from Postgres

A while ago a client wanted a way to call Redis commands from Postgres, for example to push values to Redis, or to perform other Redis housekeeping that can't be done by use of the Redis FDW. I put together a quick Postgres interface using the official hiredis client library, and I have been polishing it up a bit in preparation for my talk at PgCon. The package is now available for public use.

The basic interface contains functions to provide persistent connection handles, to disconnect, and to call the library functions redisCommand() and redisCommandArgv(). Yesterday, I added functions to push a record to a Redis “table”, and to drop a Redis “table”. I am planning to add a function to push a whole Postgres table or view, but for now this can be achieved by pushing the records - tests yesterday on very modest hardware managed to push 1 million rows in 130 seconds and drop the same table in 45 seconds. Of course, in Redis dropping a table means in effect dropping its component objects, so it's still doing several million operations here.
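
To give a flavour of how it hangs together, usage looks something like the following - but note that these function names and signatures are purely illustrative, not the package's documented API:
-- hypothetical sketch only: names and signatures here are assumptions
SELECT redis_connect(0, '127.0.0.1', 6379);        -- persistent handle 0
SELECT redis_command(0, 'LPUSH mylist somevalue'); -- wraps redisCommand()
SELECT redis_disconnect(0);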

Of course, Redis doesn't really have tables in the PostgreSQL sense. It has a single global namespace which contains keys, which are strings, and values, which are various sorts of things (strings, integers, lists, hashes, sets, ordered sets). Hence my use of inverted commas above. For this purpose, a Redis table consists of a set of objects whose keys have a common prefix, or whose keys are the elements of a Redis set object. These uses are critical to the effective use of the Redis Foreign Data Wrapper, which I will also be expanding upon in my talk.

I also added some regression tests which demonstrate the abilities of the interface package. I'll be working on improving the documentation shortly.


Monday, May 6, 2013

Reminder: you can't set high shared buffers on 32 bit systems.

Today I got a brutal reminder that using lots of memory on 32 bit systems is dangerous. One of my clients had followed all the usual advice and set shared_buffers at about 3Gb on a 32 bit 16Gb system. They started getting out of memory errors for no apparent reason. It was quite puzzling, as their system wasn't at all stressed. Then I remembered this excellent blog post by Robert Haas, who analyzes the situation perfectly. The client lowered shared buffers to 2Gb and the problem stopped.

Bottom line: no matter how much memory you have, and no matter whether or not you have a PAE kernel, there is a hard 4Gb limit on the address space of a single process on 32 bit systems, and thus the practical limit of shared buffers on ALL 32 bit systems is around 2Gb.

Utilities for building JSON

As we get more experience in using JSON in Postgres we get a better idea of what we need to make its use simple and effective. This work is the result of some such experience.

Today I'm releasing a new extension, json_build, which provides a couple of utility functions for building JSON, build_json_array() and build_json_object(). Both of these functions can take any number of arguments of any type, and they will build a JSON value of the appropriate kind. Note that JSON arrays, unlike Postgres arrays, can be heterogeneous: the values do not all need to be of the same type. When building an object, the arguments are treated pairwise, so it's an error if build_json_object() gets an odd number of arguments. The first item in each pair must be a non-null scalar (i.e. not an array or record) and must not be a JSON value.

Any JSON value is simply passed through, either as an object field value or as an array element, so, since these functions return JSON, you can nest calls to them and thus build up your JSON values in a fairly natural way.

Examples:

SELECT build_json_object( 
       'a', build_json_object('b',false,'c',99), 
       'd', build_json_object('e',array[9,8,7]::int[],
           'f', (select row_to_json(r) from ( select relkind, oid::regclass as name 
                                              from pg_class where relname = 'pg_class') r)));
                                    build_json_object                                        
-------------------------------------------------------------------------------------------------
 {"a" : {"b" : false, "c" : 99}, "d" : {"e" : [9,8,7], "f" : {"relkind":"r","name":"pg_class"}}}
(1 row)

 SELECT build_json_array('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": [1,2,3]}');
                           build_json_array                            
-----------------------------------------------------------------------
 ["a", 1, "b", 1.2, "c", true, "d", null, "e", {"x": 3, "y": [1,2,3]}]
(1 row)

Enjoy!

Saturday, April 27, 2013

Use a VM when in any doubt

Long experience has taught me that some software is best run in a Virtual Machine where it can do relatively little harm to other components of your system. Right at the top of this list is almost all VPN software. The only exception I usually make is for OpenVPN, which is almost always very well behaved. I have a couple of VMs set up for just this purpose. They normally only get fired up when I need to connect to a client over a VPN.

The other day I needed to connect fairly urgently to a client's network, and they require use of a VPN. The VPN software in question, which is proprietary, is one I know from experience can cause problems. However, I could not get it working on my sacrificial VMs, which run Linux, and they told me that they couldn't get it reliably working on Linux either, and that they all connect from Windows. I didn't have a sacrificial VM running Windows, and so, against my better judgement, I installed the software on a non-virtual Windows machine that, among other things, runs a couple of buildfarm animals as well as a number of Linux VMs I am using currently on another project.

Big Mistake.

The VPN worked OK, but, when I was done, the Virtualbox network interfaces were hosed. It took me a long time - literally hours - to work out what had happened, since the problem only became apparent when the VMs were recycled, which was a couple of days after using the VPN software. I tried, in turn, disabling the VPN software, uninstalling the VPN software, rebooting, and updating Virtualbox. Nothing worked. Finally I did a system restore back to the point before I had first installed the VPN software. Suddenly everything worked again.

So today I have spent more hours setting up a Windows VM I can use for this purpose. First I had to go out and buy a legitimate copy of Windows, and then get it set up and installed in a VM on my laptop. I put it there so I still have it available when I'm on the road.

But this software still had more surprises for me. It's set up not to allow VPN connections to be made from remote desktop connections. And this is rather important to me since I don't carry my laptop around my house, and when at home I mostly don't work from the laptop. But it turns out that Virtualbox will run a remote desktop server for you and connect the guest console to it, so that the guest thinks the connection is local even when it's not. Take that, nasty VPN vendor!

All in all this has cost me the best part of two days to fix and configure. I'm more than mildly annoyed by it. But I have learned my lesson. I will never run this sort of software from a non-virtual host again.

Thursday, April 18, 2013

unnest_ordinality extension

The other day Josh Berkus was working with one of our clients and needed a fast way to unnest some arrays with ordinality, that is, to have a row number attached to each element. The usual way that is recommended for doing this is to use subscripting with generate_series(), or calling unnest() in a plpgsql function and then returning the elements one by one. Both of these turned out to be not fast enough, and he was resorting to some fairly ugly code to get around the speed issue. I offered him a simple C function that would do this very quickly, and we're releasing it publicly today. On our client's site it has been extremely fast and simple to use.

Example:
andrew=# create extension unnest_ordinality;

andrew=# select * from unnest_ordinality('{a,b,c,d,e,f,g}'::text[]);
 element_number | element 
----------------+---------
              1 | a
              2 | b
              3 | c
              4 | d
              5 | e
              6 | f
              7 | g
The package is available on The PostgreSQL Experts Inc Github Repository and also on PGXN

Sunday, April 14, 2013

Vagrant buildfarm client

I've recently been exploring use of the nifty vagrant package for a project I'm about to start working on. So today, in an idle moment waiting for some other stuff to complete, I created a simple setup that gives you a complete, working, unregistered buildfarm client, all stashed away nicely in a Scientific Linux 6.3 x86_64 VM. If you have vagrant installed and working with VirtualBox, all you have to do to run this is:
git clone https://bitbucket.org/adunstan/sl6-x64-buildfarm-vagrant.git
cd sl6-x64-buildfarm-vagrant && vagrant up
Once the machine is up and running you can log in and run the client straight away. It will create a git mirror, and configure, build, and test Postgres. All from one command:
cd ~vagrant/bf && perl run_build.pl --test --verbose
Enjoy.

Tuesday, April 9, 2013

Backport of 9.3 json enhancements


I've done a backport of the 9.3 JSON enhancements, as a 9.2 extension, or rather as much of them as it's reasonably possible to backport. There are some caveats to using this extension, so please read the warnings carefully before you use it.

If you use PGXN you can get it from there, or from the public git repository at bitbucket.

Thanks to David Wheeler for prodding me to finish this up.

Enjoy.

Thursday, April 4, 2013

Linux/ZFS is great

A few days ago the ZFS on Linux people finally produced a production release of their latest code. I had been playing around a bit with it on an Ubuntu VM, but yesterday I installed it on a customer's RedHat 6.2 production system. This system will be used to store PostgreSQL backups made with the barman package, and the feature we most wanted to use was the builtin file system compression - mainly to save on space requirements. Installing it and turning a SAN volume into a ZFS volume turned out to be incredibly easy and painless. And it's a win-win situation. The IO time saved by using a compressed file system has reduced the time for a backup by about 60%, and the space saved in our case is about 80%. All in all I'm impressed, and I'm certainly going to be trying out some more of its features. It will be nice to ascertain which modes are best suited to production database use.

Friday, March 29, 2013

JSON processing functions and the new JSON parser API

Today, with a great sense of relief, I committed the second of my major JSON patches for this release cycle, just beating the imminent close of the current commit-fest.

Most of the attention has focussed on the new JSON processing functions, which you can see details of in the dev docs. But the most important part in my mind is really the less visible part, the scaffolding on which these new functions were built. Essentially this took the original (and very good, for its purpose) JSON parser that Robert Haas wrote for release 9.2, which was stack based, and turned it into a classic recursive descent parser. In doing that it provided the ability for functions calling the parser to supply callback hooks for certain parsing events, such as the beginning or end of a JSON object or array. It's kind of the JSON equivalent of an XML SAX parser. That means it's quite easy to construct new JSON processing functions without having to worry at all about the nitty gritty of JSON parsing. I expect to see a number of JSON processing extensions based on this (and I expect to write a few myself).
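
To give a taste of the visible part, here are a few of the new processing functions as they appear in the 9.3 dev docs:
-- expand an object into key/value rows
SELECT * FROM json_each('{"a": 1, "b": [2,3]}');

-- drill into nested objects by path
SELECT json_extract_path_text('{"a": {"b": "c"}}', 'a', 'b');

-- unnest a JSON array into one row per element
SELECT value FROM json_array_elements('[1, 2, 3]');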

I'm hoping to give a talk at a conference later in the year about how to build these processing functions.

Sunday, March 24, 2013

Parallel pg_dump committed

Today I committed Joachim Wieland's patch to add a parallel option to pg_dump when outputting directory format. Barring some unexpected and evil occurrence, this should be in the forthcoming 9.3 release. This is the culmination of several years of work. I started talking about this shortly after I did the parallel pg_restore work for release 8.4.

This was a lot more difficult and complex than parallel pg_restore. The three pieces required to make this work were: an archive format that we could sensibly write to in parallel, a way of making sure that a dump done in parallel was consistent, and then the actual code to use those features to achieve parallel dumping. The first piece was the directory archive format, which was introduced in Release 9.1. The second piece was snapshot cloning, which was introduced in Release 9.2. This final piece builds on those two earlier pieces of work to make parallel dumps a reality.

This development throws away some of my earlier work. We now have a common parallel processing infrastructure that is used for both pg_dump and pg_restore. The new code pre-creates the required number of workers (threads on Windows, processes on Unix) and farms out multiple pieces of work to them. This is quite different from the earlier way pg_restore worked, which created workers on demand and allocated exactly one piece of work to them.

Congratulations to Joachim on this milestone.

Friday, March 15, 2013

Recent feature requests

In the last few days customers have wanted the following PostgreSQL features:
  • A time zone setting of "system" which will just pick up whatever the system's time zone setting is at server start. This would be useful for appliance makers who pre-install a database and then ship the appliance to a customer who can change the time zone, as one of our customers does.
  • The ability to set a default tablespace for a schema, and to move all the objects in the schema to that tablespace. This was apparently something that got ripped out in the very late stages of the 8.0 development cycle, but it doesn't seem like a terribly bad idea at first glance.

Wednesday, March 13, 2013

Writeable Foreign Tables

There are lots of cool things coming in 9.3, but the one I'm licking my lips over is one that was committed the other day by Tom Lane: Writeable Foreign Tables by KaiGai Kohei. As soon as I get done with the work I have in hand for core 9.3 I will be an early adopter, adding this to the Redis FDW. This is going to be very cool.

Saturday, March 9, 2013

Loading Useful Modules in PLV8

PLV8 is a trusted language. In fact it is arguably safer than pretty much any non-core loadable language, since the interpreter is naturally sandboxed. So there's no way to load external processing modules from the file system. However, pure JavaScript modules can be cleanly loaded from the database itself. A couple of extremely useful ones I have found are Underscore and node-jPath. The former can be loaded with no modification whatever. The latter is written for use with node.js and needs a small amount of adjustment before it's loaded in plv8.

So how do you do this? Well, first you need to grab the code:
curl -o underscore.js http://underscorejs.org/underscore-min.js
curl -o jpath.js https://raw.github.com/stsvilik/node-jpath/master/lib/node-jpath.js
Then edit jpath.js so that, at the bottom, instead of assigning to module.exports it creates an object called jpath and adds the select and filter methods to it.

Then you'll need a table to load the modules from, and to put the modules in this table. You'll also need a function to load them. These can be accomplished with the following psql script:
\set underscore `cat underscore.js`
\set jpath `cat jpath.js`

create table plv8_modules(modname text primary key, load_on_start boolean, code text);

insert into plv8_modules values ('underscore',true,:'underscore'),
    ('jpath',true,:'jpath');

create or replace function plv8_startup()
returns void
language plv8
as
$$
load_module = function(modname)
{
    var rows = plv8.execute("SELECT code from plv8_modules " +
                            " where modname = $1", [modname]);
    for (var r = 0; r < rows.length; r++)
    {
        var code = rows[r].code;
        eval("(function() { " + code + "})")();
    }
        
};
$$;

Now test it out:
select plv8_startup();
do language plv8 ' load_module("underscore"); load_module("jpath"); ';
-- test the underscore module's extend function
do language plv8 $$ 
   x = {'a':1}; 
   y=_.extend(x,{'a':2,'b':3},{'b':4,'c':5}); 
   plv8.elog(NOTICE,JSON.stringify(y)); 
$$;
-- test jpath module's filter function
do language plv8 $$
  var jsonData = {
    people: [
        {name: "John", age:26, gender:"male"},
        {name: "Steve", age:24, gender:"male"},
        {name: "Susan", age:22, gender:"female"},
        {name: "Linda", age:30, gender:"female"},
        {name: "Adam", age:32, gender:"male"}
    ]
  };
  // We want to get all males younger than 25
  var match = jpath.filter(jsonData, "people[gender=male && age < 25]");
  plv8.elog(NOTICE,JSON.stringify(match));
$$;
You should see output like:
NOTICE:  {"a":2,"b":4,"c":5}
DO
NOTICE:  [{"name":"Steve","age":24,"gender":"male"}]
DO
Now, the final step is to make these get loaded automatically. Alter your plv8_startup() function by adding the following at the end:
// now load all the modules marked for loading on start
var rows = plv8.execute("SELECT modname, code from plv8_modules where load_on_start");
for (var r = 0; r < rows.length; r++)
{
 var code = rows[r].code;
 eval("(function() { " + code + "})")();
}
and finally add
plv8.start_proc = 'plv8_startup'
to your postgresql.conf file, and restart. You should now be able to use these modules anywhere in your PLv8 code.


Monday, February 25, 2013

A few notes from the field on git

Any time a source code management system gets in your way it's not doing its job. It's really something you should have to think about as little as possible. One thing I found out by doing it the wrong way the other day is that if you create a pull request on bitbucket or github, you really need to do it on a topic branch created just for the purpose. After the upstream repo you have sent the pull request to has merged your changes you can delete the branch. Cleaning up after I unthinkingly sent a pull request for the node-postgres driver from a long-lived branch was rather ugly.

I also wish it were possible to exclude certain files (e.g. regression test result files) from having trailing whitespace colored. But if it is possible, I have yet to find the way.

Saturday, February 23, 2013

Learning new technology

One of my oldest clients is switching to a new technology stack. After many years of using AOLServer, OpenACS and Postgres as their preferred basis for applications they have switched to using nginx, Redis, node.js and Postgres with plv8.

I'll be talking about some of this, especially the use of Redis with Postgres, at pgcon in May.

I can't say I really like the node.js style of programming, but needs must when the devil drives. One thing worth noting is that there is quite a good node.js driver for Postgres, and that the maintainer was very responsive in merging in a patch I sent. I've sent in another one (to fix a bug connecting to Unix domain sockets) and I hope it too will be picked up very promptly.

Friday, February 15, 2013

PostgreSQL doc epub

The other day Peter Eisentraut made a small commit that was probably little noticed, but I think is very useful. He added a few lines to the documentation system to enable building the Postgres documentation as an epub document. This is really quite nice. The great advantage of epub documents over things like PDFs is that they are reflowable, so that they are adaptable to devices with a wide range of form factors. They are also typically much smaller than PDFs. So now I can put the PostgreSQL docs on my Ebook device. I can scale the font up if necessary and still have it look sane. And I can put the same file on an Android device, or an iPhone or iPad. There are free epub readers for almost every smart mobile device around. It won't work on a Kindle but you can convert it to Kindle format using some free software. There is also an epub plugin for Firefox. So this is cool.

Monday, February 4, 2013

Binding keys in psql

The psql client has a nasty habit of being a bit over-aggressive about when it routes output through the pager. So I often find myself typing \pset pager to turn it off or on. Or I did up to today, when I finally got tired of it and added these lines to my .inputrc file:
$if psql
"\e[24~": "\\pset pager\n"
$endif
This binds the command to my F12 key, so now I can turn the pager on or off with a single key stroke.

Thursday, January 31, 2013

Json object from an array of key value pairs

I mentioned this the other day, and I now have an extension with a simple function to turn an array of key value pairs into a json object, much as you can turn such a thing into an hstore.

The motivation for this came from a discussion on what to do with structured data returned by the Redis Foreign Data Wrapper. Originally we discussed making it json or an hstore, but on reflection it seemed best to me simply to return it as a standard SQL array and use the database utilities to transform it into whatever is needed. So this function works pretty much like the hstore function except that here it is the caller's responsibility to ensure that the keys are unique. When used with the Redis FDW that won't be a problem - they will always be unique.
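
To make that concrete, here's the hstore behaviour it mirrors, along with the shape of the json version - the json function name below is illustrative, not necessarily what the extension calls it:
-- the existing hstore constructor for a key/value array
SELECT hstore(ARRAY['a','1','b','2']);
-- "a"=>"1", "b"=>"2"

-- the json analogue (illustrative name; keys must be unique, since
-- unlike hstore this function won't de-duplicate them)
SELECT json_object(ARRAY['a','1','b','2']);
-- {"a" : "1", "b" : "2"}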

Tuesday, January 29, 2013

Redis FDW update.


Today I've published a major revision to the Redis Foreign Data Wrapper code for use with PostgreSQL versions 9.2 and later. There are two new features: support for Redis structured data types and support for restricted keyspace search.

The structured data type support is enabled by using the table option "tabletype", which can take one of the values "hash", "list", "set" or "zset". If this option is used the data comes back as an array rather than as a simple scalar value. In the case of "hash" tables, the values are a sequence of key/value pairs. For the other types they are simply the elements of the relevant structure. If the values column of these tables is defined to be of type "text[]" (i.e. array of text) then an actual array is returned. If the column is a plain text column an array literal is returned. Hash table arrays in particular can be turned into records via hstore's "populate_record" functionality, or transformed in other useful ways.

Restricted keyspace search is enabled by using one of the table options "tablekeyprefix" or "tablekeyset". These are mutually exclusive. "tablekeyprefix" restricts the search to keys with the given prefix. However, in a very large Redis database this might still be expensive. In that case, it might be better to keep a list of the keys in a separate set, and this is supported using the "tablekeyset" option. When this is used the global keyspace isn't searched at all, and the list of keys is simply taken as the members of the set.
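
Pulling those options together, table definitions might look like this (server, column and key names are just examples):
-- hash-typed objects whose keys all start with 'web.session.',
-- returned as the key plus an array of field/value pairs
CREATE FOREIGN TABLE redis_sessions (key text, value text[])
SERVER localredis
OPTIONS (tabletype 'hash', tablekeyprefix 'web.session.');

-- or keep the interesting keys in a Redis set, so the global
-- keyspace is never scanned at all
CREATE FOREIGN TABLE redis_sessions_by_set (key text, value text[])
SERVER localredis
OPTIONS (tabletype 'hash', tablekeyset 'web.session.keys');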

The new functionality is demonstrated in the test script (which is also new).

I am also working on some facilities to push data into Redis and similar things, but these fall outside the current possibility of a Foreign Data Wrapper, and will be announced separately.

 Credit note: this work was supported by IVC

Monday, January 28, 2013

Array of Key Value pairs to Json

As I'm working on making the Redis Foreign Data Wrapper actually do something useful, it's occurred to me that one of the Json functions we're missing is the ability to turn key value arrays into Json, as we can with hstore. There's a bit of a mismatch in that Json's treatment of numbers and booleans involves distinguishing these from strings, but it would be possible to have a "use heuristics" option as I recently did with hstore to Json conversion.

When I get done with the Foreign Data Wrapper I will probably whip up a quick extension to add this functionality.

Tuesday, January 22, 2013

Handling Redis Hashes

IVC, a great boutique software house that I have worked with for quite a few years, has been shifting to a new, more modern, software stack, part of which is Redis. They have asked me to work on improving the interface between Redis and PostgreSQL. Last week they mentioned that they are doing a lot with Redis hashes, and asked me to improve access to them. These are basically sets of key/value pairs. In PostgreSQL terms think of hstore for an equivalent.

So I came up with a simple idea of mapping a foreign table to a Redis hash. This actually isn't too hard to implement, and I got it working more or less yesterday:

andrew=# \! echo type foo | redis-cli
hash
andrew=# \d xxx
       Foreign table "public.xxx"
 Column | Type | Modifiers | FDW Options 
--------+------+-----------+-------------
 key    | text |           | 
 value  | text |           | 
Server: localredis
FDW Options: (hashkey 'foo')

andrew=# select * from xxx;
   key   | value 
---------+-------
 bar     | baz
 blurflk | asdf
 qqq     | ttt
(3 rows)
So far so good. You can treat a Redis hash as a little table and select from it. And we could perform similar operations for the other Redis data structures (lists, sets and zsets).  But I'm wondering if this is going to fit well with actual Redis practice. Let's say we're dealing with web session objects. In PostgreSQL you would probably have a table for these with a session id and an hstore or maybe a piece of json (especially after my json extractor functions get committed :-) ). But in Redis I think you're at least as likely to have one hash per session object rather than a hash of session objects, and Redis data structures don't nest, so there's no hash of hashes. So if you have thousands or millions of tiny session object hashes in Redis, with names like "web.session.a9f3c1763d" where the last part of the name is some fairly opaque session name, do you really want to be having to map each one of those to its own foreign table in order to be able to read the values nicely? I don't think so.

Following this thought I came up with the idea of a related FDW which would give you access to the whole of the hashspace in Redis. The tables would have three fields, not two, being, say, (hashname, key, value). We could even impose a restriction on queries that would require you to specify a hashname, although that's possibly a bit ugly, and maybe we should just warn you against not supplying a hashkey query parameter.

So, which of these patterns is likely to be most useful? They are not actually mutually exclusive, and it's perfectly possible to implement both. But I would like to know what seems most useful to Redis users generally, especially if they are also using PostgreSQL.

Another possibility would be not to use the Foreign Data Wrapper interface at all, and just provide a set returning function which would take some connection parameters and a hash name and return the key value pairs. This too would be reasonably easy to do, as I already have (and will soon publish) a framework for generalised Redis access. On the other hand, one significant advantage of using Foreign Data Wrappers is that with FDWs handling pushed down quals ("where" conditions in effect) is rather easier to do, from what I can see.

Thursday, January 17, 2013

Mingw64 fail

Last week Gurjeet Singh asked me about getting the MinGW64 compiler working. When I looked I noticed that they had not made an official build for Mingw hosts (as opposed to linux hosts and darwin hosts) since late 2011. I went on IRC and asked why, and was told that they had switched to GYP to generate builds and it was broken on Mingw. Apparently nobody in the GYP project is interested in fixing it because Mingw isn't one of their supported platforms. Someone gave me this URL that describes the problem: http://code.google.com/p/v8/issues/detail?id=2197#c6

I was, as they say, gobsmacked. Nobody even really seemed to care that much.

If the build tool doesn't support one of your three main target platforms, and moreover it's the only target which isn't cross-compiling, STOP USING IT.

Tuesday, January 15, 2013

Inline is useful

As I was looking for ways yesterday to claw back some of the cost of switching to a recursive descent parser for json, I was interested to see that I could get a small but quite observable improvement simply by declaring a few hotspot functions as "inline". This means that the compiler will expand calls to them rather than generating a function call, thus saving the overhead of setting up a stack frame and doing a procedure call. The benefits of doing this vary depending on the compiler and the architecture, but they can be substantial.

Not everything is a candidate for inlining - external functions can't be inlined because the call sites aren't available when they are compiled, and recursive functions can't either because it could cause infinite regress.

One downside is that it can make debugging harder, so it's often best to leave inlining to late in the development process.

Anyway, I'll probably pay rather more attention to it in future.

Monday, January 14, 2013

Json performance

Robert Haas wanted to know how much slower the recursive descent JSON parser would be than the explicit stack based one it's replacing. It's a reasonable question, and no doubt one close to his heart as he was the author of the original :-)

So we need a test, and I devised one by making a single large piece of json like this:
create temp table myjson as 
    select array_to_json(array_agg(q)) as jsontext 
    from (select * from pg_class) q;
\copy myjson to jsontext.csv csv
and a test script that looks like this:
create temp table myjson(j text);
\copy myjson from  jsontext.csv csv
\timing
do $$ declare lj text; begin select into lj j from myjson; 
  for i in 1 .. 10000 loop perform json_in(textout(lj)); 
  end loop; end; $$;

(In the actual test file the last line is repeated several times).

Robert had also pointed out that the parser was copying some strings in places where it didn't need to, and I quickly fixed that. But even after that change I was a bit dismayed to discover that there was a small but quite consistent performance degradation, so I went looking for things to tweak. It didn't take long to see that some things I had written as "if" statements repeatedly calling the routine that looks at the next look-ahead token could be much better written as a single call to that routine plus a "switch" statement.

The test script and data files are published at https://bitbucket.org/adunstan/pgdevel/downloads/jsontest.sql and https://bitbucket.org/adunstan/pgdevel/downloads/jsontext.csv, and the latest code with the above fixes can be pulled from https://bitbucket.org/adunstan/pgdevel.git - you're welcome to play along. When this is all sorted out I will publish a new patch to the mailing list. Right now I can no longer get a consistently better result using release 9.2 than I get with my code. One of my colleagues is testing on a more stable platform than I have available. But I'm quite hopeful now that, at worst, the performance is fairly close.

Friday, January 11, 2013

Version 4.10 of the buildfarm client has been released.

Following GitHub's abandonment of their download feature, releases will now be published on the buildfarm server. The latest release will always be available at http://www.pgbuildfarm.org/downloads/latest-client.tgz and this particular release is available at http://www.pgbuildfarm.org/downloads/releases/build-farm-4_10.tgz

The main feature of this release is that it does better logging of pg_upgrade failures.

The rest is minor bug fixes and very small enhancements.

Tuesday, January 8, 2013

Oh for decent recruiters in our industry.

The cluelessness of most IT recruiters never ceases to astound me. I received this today:
My name is (deleted) and I'm an IT recruiter at (deleted). Our records show that you are an experienced IT professional with experience relevant to one of my current contract openings.(deleted)
The job is located in (deleted) with one of our Fortune direct client. They are looking for a GUI Developer
Not only am I not qualified for this job, but I have proclaimed loudly my intention never again to do GUI work.

Plonk.

Monday, January 7, 2013

Fixing pgindent

It's sometimes amazing how hard finding a single misplaced or wrong character can be. I was puzzled and annoyed by some very odd behaviour of pgindent with the Json code I've been working on, and finally got a clue to what was going wrong when I found it indenting the whole source tree when given a single file argument. That led me to the source of the problem, and a fix which involved removing a single spurious backslash from the code. Since I wrote it I have no-one to blame but myself for the wasted 90 minutes. Those are the sorts of things that make you give yourself a mental kick in the pants.

Thursday, January 3, 2013

GitHub kicks an own goal

When I moved the Buildfarm code from pgFoundry to GitHub, I started distributing release tarballs from there too. But now GitHub has decided that they won't support that any longer. I suspect that they are being a bit disingenuous about their reasons, which probably have more to do with abuse than confusion.

I will probably move the releases to the buildfarm server - they are tiny and there is not a huge demand. But GitHub isn't the only kid on the block. They need to remember that. I know for sure I'm not the only person annoyed about it.