
PostgreSQL on the Raspberry Pi

Thursday, February 11, 2021 - PostgreSQL is an excellent relational database system. It runs on all platforms, including the Raspberry Pi, and is lightweight, free, and open source.


PostgreSQL, also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance. It was originally named POSTGRES, referring to its origins as a successor to the Ingres database developed at the University of California, Berkeley. In 1996, the project was renamed to PostgreSQL to reflect its support for SQL. After a review in 2007, the development team decided to keep the name PostgreSQL and the alias Postgres.

Installation on Raspbian begins with:

> apt-get install -y postgresql postgresql-client postgresql-client-common
> apt-get install -y python3-psycopg2
> sudo -u postgres psql -c "SELECT version();"
PostgreSQL 11.11 (Raspbian 11.11-0+deb10u1) on arm-unknown-linux-gnueabihf, compiled by gcc (Raspbian 8.3.0-6+rpi1) 8.3.0, 32-bit

where the native version of PostgreSQL is 11.11 on this 2021-03-04 image of Raspbian. Setting up some basic privileges for psql users:

> su - postgres
> psql template1
>> alter user postgres with superuser password 'funky_password_noone_knows';
ALTER ROLE
>> create user htruman with superuser password 'another_funky_password_noone_knows' createdb;
CREATE ROLE
>> \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 htruman   | Superuser, Create DB                                       | {}
>> create database test1;
CREATE DATABASE
>> grant all privileges on database test1 to postgres;
GRANT
>> \l+
                                                                    List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   | Tablespace |                Description                 
-----------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7660 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7660 kB | pg_default | unmodifiable empty database
           |          |          |             |             | postgres=CTc/postgres |         |            | 
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7660 kB | pg_default | default template for new databases
           |          |          |             |             | postgres=CTc/postgres |         |            | 
 test1     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +| 7660 kB | pg_default | 
           |          |          |             |             | postgres=CTc/postgres |         |            | 
(4 rows)
>> \dg
>> \q

Since much or all of the instrument automation will be run through Python scripts, Python must be able to communicate effectively with PostgreSQL. The psycopg2 driver for Python accomplishes this well, using native Postgres SQL. The following creates a Python script that connects to the Postgres database, queries version and date/time information, and prints the results to standard output:

> cd ~
> nano psql_test.py
#!/usr/bin/env python3
#coding: utf-8
import os
import psycopg2, psycopg2.extras
from datetime import datetime
home_folder, script_name = os.path.split(os.path.abspath(__file__))
home_folder = home_folder+"/" #gets the absolute path of this python file
try:
    print('home_folder = "'+home_folder+'"')
    print("dbname='template1' host='%s' user='%s' password='%s'" % ('localhost', 'postgres', 'funky_password_noone_knows'))
    c = psycopg2.connect("dbname='template1' host='%s' user='%s' password='%s'" % ('localhost', 'postgres', 'funky_password_noone_knows'))
    q = c.cursor(cursor_factory=psycopg2.extras.DictCursor)
    print("db connected")
    print('bos...processing')
    q.execute('select * from version();')
    x = q.fetchone()[0]
    x = ' '.join(x.split(' ')[:2])
    print("psql version:", x)
    q.execute('select * from now();')
    x = q.fetchone()[0]
    print("psql datetime:", x)
    q.execute('select current_date, current_timestamp, localtime;')
    x = q.fetchone()
    print("type of: current_date, current_timestamp, localtime: ", [i.__class__.__name__ for i in x])
    print("psql date:", x[0].strftime('"%Y-%m-%d %a"'), "| w/time:", x[1].strftime('"%B %-d, %Y %A %Y-%m-%d @ %H:%M:%S %Z"'), "| localtime:", x[1].strftime('"%H:%M:%S"'))
    print('eos...processing')
finally:
    if ('q' in locals()) and (q is not None):
        q.close()
    if ('c' in locals()) and (c is not None):
        c.close()
    print("db disconnected")
print("exit")
exit()
>> Ctrl-X, Save? >> Y, Enter
> chmod +x psql_test.py
> ./psql_test.py
home_folder = "/home/pi/"
dbname='template1' host='localhost' user='postgres' password='funky_password_noone_knows'
db connected
bos...processing
psql version: PostgreSQL 11.11
psql datetime: 2021-04-15 17:18:11.554997-04:00
type of: current_date, current_timestamp, localtime:  ['date', 'datetime', 'time']
psql date: "2021-04-15 Thu" | w/time: "April 15, 2021 Thursday 2021-04-15 @ 17:18:11 -04" | localtime: "17:18:11"
eos...processing
db disconnected
exit
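
As a quick sanity check of the htruman role and the test1 database created above, the same pattern can be reused to create and query a throwaway table; the table name sensor_readings is only a placeholder for illustration:

#!/usr/bin/env python3
#coding: utf-8
#sketch: connect to test1 as htruman, create a throwaway table, insert one row, read it back
import psycopg2
c = psycopg2.connect("dbname='test1' host='localhost' user='htruman' password='another_funky_password_noone_knows'")
q = c.cursor()
q.execute("create table if not exists sensor_readings (id serial primary key, taken_at timestamptz default now(), value real);")
q.execute("insert into sensor_readings (value) values (%s);", (23.5,))
c.commit() #psycopg2 opens a transaction implicitly, so the changes must be committed
q.execute("select id, taken_at, value from sensor_readings order by id desc limit 1;")
print("last row:", q.fetchone())
q.close()
c.close()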

Stepping back to the larger picture, the Postgres tables will be created using web2py (see web2py on the Raspberry), so that the Data Abstraction Layer (DAL) of web2py has syntactically clean code on the web-server side. The Python scripts will then access and use the tables and fields created under web2py. Once measurements and experimental data are dumped into Postgres, the database will be able to send an event, called a server-sent event (SSE), to any listening web browsers, which then update their user-interface controls, i.e., status, meters, graphs, etc., for the users monitoring the experiment.
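
To sketch the database side of that event idea, PostgreSQL's LISTEN/NOTIFY mechanism can push a message to a waiting Python script (or to the web server process that relays it to browsers as an SSE). The channel name experiment_updates below is a placeholder, and the SSE relay itself is not shown:

#!/usr/bin/env python3
#coding: utf-8
#sketch: LISTEN on a channel and print each NOTIFY payload as it arrives;
#a web server process would relay these to browsers as server-sent events
import select
import psycopg2, psycopg2.extensions
c = psycopg2.connect("dbname='test1' host='localhost' user='postgres' password='funky_password_noone_knows'")
c.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT) #so LISTEN takes effect immediately
q = c.cursor()
q.execute("LISTEN experiment_updates;")
print("listening on channel experiment_updates (Ctrl-C to stop)")
while True:
    if select.select([c], [], [], 5) == ([], [], []): #wait up to 5 s for socket activity
        continue
    c.poll()
    while c.notifies:
        n = c.notifies.pop(0)
        print("notify:", n.channel, n.payload)

A notification could then be raised from any other session, e.g. NOTIFY experiment_updates, 'new data'; or select pg_notify('experiment_updates', 'new data');, typically from a trigger on the measurement table.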

Option 1: It may be necessary to allow access to Postgres from outside the localhost of the RPi. To do that, modify the pg_hba.conf and postgresql.conf files accordingly and restart the daemon:

> cd /etc/postgresql/11/main
> nano pg_hba.conf
#added under the # TYPE... header, at the bottom of the file...
host    all             htruman        192.168.1.0/24          md5
>> Ctrl-X, Save? >> Y, Enter
> nano postgresql.conf
#above the #listen_addresses line, add:
listen_addresses = '*'
#listen_addresses = 'localhost'         # what IP address(es) to listen on;
>> Ctrl-X, Save? >> Y, Enter
> systemctl restart postgresql

where, if the db is hosted on AWS, access will also need to be granted in AWS's security group, and a rule added to the server's firewall for port 5432, i.e., iptables. Therefore, there could be up to four levels of security to pass through to gain access to the db: (i) AWS security group, (ii) firewall, (iii) postgresql.conf, and (iv) pg_hba.conf.
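
Once those layers are open, a minimal check from another machine on the 192.168.1.0/24 subnet might look like the following sketch, where 192.168.1.50 is a placeholder for the RPi's address:

#!/usr/bin/env python3
#coding: utf-8
#sketch: run from another machine on the LAN to confirm remote access;
#192.168.1.50 is a placeholder for the RPi's address
import psycopg2
c = psycopg2.connect("dbname='test1' host='192.168.1.50' port='5432' user='htruman' password='another_funky_password_noone_knows'")
q = c.cursor()
q.execute("select inet_server_addr(), inet_client_addr();") #server and client addresses as seen by postgres
print("server/client:", q.fetchone())
q.close()
c.close()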

Option 2: I've used the CLI psql to access databases, tables, and data directly, but installing the graphical interface to Postgres, called pgAdmin, is left as an option:

> apt-get install pgadmin3
