Monday, December 5, 2016

Postgres Cross-Server/Cross-Database Querying

To demonstrate cross-database querying we will create two databases, "PlaygroundLive" and "PlaygroundStaging". The two databases can be on the same PostgreSQL server or on different servers; in this case they are on the same server.

On both databases create the table below:

create table TestTable
(
  id int primary key,
  name varchar(500)
)

On the staging database add three rows:

insert into TestTable(id,name) values(1,'sagar'),(2,'sangram'),(3,'sachin');

On the live database add two rows:

insert into TestTable(id,name) values(1,'sagar'),(2,'sangram');

Now create the dblink and postgres_fdw (foreign data wrapper) extensions:

CREATE EXTENSION postgres_fdw;

CREATE EXTENSION dblink;

Check whether the dblink extension was created by looking for its newly added procedures/functions:

SELECT pg_namespace.nspname, pg_proc.proname FROM pg_proc, pg_namespace WHERE pg_proc.pronamespace=pg_namespace.oid AND pg_proc.proname LIKE '%dblink%';
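
The installed extensions (and their versions) can also be listed directly from the pg_extension catalog:

SELECT extname, extversion FROM pg_extension;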

From the staging database, connect to the live database and check whether the connection succeeds; the query below returns OK on success:

SELECT dblink_connect('host=localhost user=postgres password=sangram dbname=PlaygroundLive');

CREATE FOREIGN DATA WRAPPER dbrnd VALIDATOR postgresql_fdw_validator;

Creating server connection:
Even though both databases are on the same PostgreSQL server, we follow the same process: we provide the server address and the database to connect to.
CREATE SERVER demodbrnd FOREIGN DATA WRAPPER postgres_fdw OPTIONS (hostaddr '127.0.0.1', dbname 'PlaygroundLive');

You can check whether the server creation succeeded by running the command below:

SELECT * FROM pg_foreign_server; 
 
CREATE USER MAPPING FOR postgres SERVER demodbrnd OPTIONS (user 'postgres',
password 'sangram');
 
Here the postgres user on the local server is mapped to the postgres user on the remote server.
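
You can verify the mapping by querying the pg_user_mappings view:

SELECT * FROM pg_user_mappings;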

Connect to Server:

SELECT dblink_connect('demodbrnd');

GRANT USAGE ON FOREIGN SERVER demodbrnd TO postgres;

Running Cross Database Queries:
Now, from the staging database, we can run queries against the live database.


1) Select:
SELECT * FROM public.dblink('demodbrnd','select id,name from public.TestTable') AS DATA(id INTEGER,name CHARACTER VARYING);

It returns the two records currently present on the live database.
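
Based on the rows inserted earlier, the output should look like:

 id |  name
----+---------
  1 | sagar
  2 | sangram
(2 rows)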

2) INSERT:
SELECT dblink_connect('demodbrnd');
select * from dblink_exec('INSERT INTO public.TestTable values(3,''sachin'')')

One more record gets added to TestTable on the live database. Now TestTable is identical on both databases.

3) SELECT INTO:
SELECT * INTO temp_TestTable
FROM public.dblink('demodbrnd','select id,name from public.TestTable') 
AS DATA(id INTEGER,name CHARACTER VARYING);

Here temp_TestTable is created on the staging database and gets all three records from TestTable on the live database.

4) Parametrized SELECT:
--using parametrized query with prepare
PREPARE migrate_data (integer) AS
INSERT INTO temp_TestTable
SELECT id, name
FROM dblink('demodbrnd', 'select id,name from public.TestTable')
AS t(id integer, name varchar)
WHERE id > $1;

EXECUTE migrate_data(2);

Executing the prepared statement adds one more record to temp_TestTable, so there will be 4 records now.

--check one more record got inserted
select * from temp_TestTable
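
When you are done with the prepared statement, it can be removed from the session:

DEALLOCATE migrate_data;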

Create a schema called "ForeignSchema" in the staging database.
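
A minimal statement for that, run on the staging database, would be:

CREATE SCHEMA "ForeignSchema";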

Now import all tables from the remote database's public schema into the staging database's "ForeignSchema" as follows.

IMPORT FOREIGN SCHEMA public
FROM SERVER demodbrnd INTO "ForeignSchema";

Now remove the first record from the staging database table, so that we are sure the two tables differ:

delete from public.TestTable where id=1;

You can check that "TestTable" from the live server has been imported into the local database under the "ForeignSchema" schema by running:

select * from "ForeignSchema".TestTable;

We can also run insert statements against the imported foreign table:
insert into "ForeignSchema".TestTable
select id+3,name from public.TestTable;

DROPPING Foreign Data WRAPPER:
To delete the foreign data wrapper, run the following drop statements in this order, since the objects are stacked one on top of the other.
drop USER MAPPING FOR postgres SERVER demodbrnd ;
drop SERVER demodbrnd;
drop FOREIGN DATA WRAPPER dbrnd;
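
If the extensions themselves are no longer needed on the staging database, they can be dropped as well (assuming nothing else depends on them):

DROP EXTENSION dblink;
DROP EXTENSION postgres_fdw;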

Sunday, December 4, 2016

Installing SQL SERVER on Fedora 23

Here are my notes on installation of MSSQL SERVER on Fedora.

First become root
    # su -

Microsoft SQL Server on Fedora requires two new YUM repositories: one for the SQL Server engine itself and one for the command-line tools used to query it.

Add the Microsoft software repositories to your Fedora machine as follows:

   #curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo

   #curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/msprod.repo

This creates two files in the /etc/yum.repos.d/ folder, namely "mssql-server.repo" and "msprod.repo".

Installing SQL Server:
   To install SQL Server, issue the following command:

         #yum install -y mssql-server

This installs the package, but we still need to configure it by issuing the following command:

       #/opt/mssql/bin/sqlservr-setup

This asks for a System Administrator (SA) account password, which must be at least 8 characters, and also asks whether you want SQL Server to start at boot.

You can check whether the server is running by issuing the following command:

    #systemctl status mssql-server

The output of this command in my case is:

mssql-server.service - Microsoft(R) SQL Server(R) Database Engine
Loaded: loaded (/usr/lib/systemd/system/mssql-server.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2016-12-04 20:04:11 IST; 27min ago
Main PID: 6431 (sqlservr)
Tasks: 144
Memory: 547.2M
CPU: 45.053s
CGroup: /system.slice/mssql-server.service
├─6431 /opt/mssql/bin/sqlservr
└─6493 /opt/mssql/bin/sqlservr

Dec 04 20:04:19 SangramOnFedora23 sqlservr[6431]: [145B blob data]
Dec 04 20:04:20 SangramOnFedora23 sqlservr[6431]: [66B blob data]
Dec 04 20:04:20 SangramOnFedora23 sqlservr[6431]: [75B blob data]
Dec 04 20:04:20 SangramOnFedora23 sqlservr[6431]: [96B blob data]
Dec 04 20:04:20 SangramOnFedora23 sqlservr[6431]: [100B blob data]
Dec 04 20:04:20 SangramOnFedora23 sqlservr[6431]: [71B blob data]
Dec 04 20:04:20 SangramOnFedora23 sqlservr[6431]: [124B blob data]
Dec 04 20:09:22 SangramOnFedora23 sqlservr[6431]: [71B blob data]
Dec 04 20:11:04 SangramOnFedora23 sqlservr[6431]: [156B blob data]
Dec 04 20:11:04 SangramOnFedora23 sqlservr[6431]: [193B blob data]

Microsoft SQL Server runs on port 1433; we need to open it for communication with the server from outside localhost.

If your firewall is running and blocking port 1433, then add an exception for it:

        # firewall-cmd --zone=public --add-port=1433/tcp --permanent
       # firewall-cmd --reload

Once we have SQL Server up and running, we can install the tools to connect to it:

    #yum install mssql-tools

This package needs the "msodbcsql" package, which in turn depends on the "unixODBC" package.
In my case the installed version of "unixODBC" was newer than what "msodbcsql" could handle (it probably came in with GAMBAS), so I was getting an error installing "mssql-tools".

After some quick googling I found a solution: remove the existing unixODBC package as follows:

#yum remove unixODBC

Then we need to add a new repo and import the corresponding keys:

#yum-config-manager --add-repo https://apt-mo.trafficmanager.net/yumrepos/mssql-rhel7-release/
#yum-config-manager --enable mssql-rhel7-release
#wget "http://aka.ms/msodbcrhelpublickey/dpgswdist.v1.asc"
#rpm --import dpgswdist.v1.asc

and then install "msodbcsql":

#yum install msodbcsql

After a successful installation, try installing "mssql-tools" again:

#yum install mssql-tools

On success, we can connect to the server from the command line.
Run the command below; do not forget to replace the SA account password with your SQL Server's SA password.

#sqlcmd -S localhost -U SA -P 'SA_Account_PASSWORD'

We will run a simple command to check that we are connected. First type some SQL command, say:

select name from sys.databases;

Hit Enter, then type "go" (do not end it with a semicolon) and hit Enter again to see the result of the SQL query.

Output should be similar to below:

name                                                                                                                            
--------------------------------------------------------------------------------------------------------------------------------
master                                                                                                                          
tempdb                                                                                                                          
model                                                                                                                           
msdb 

To come out of the sqlcmd prompt, use the "quit" command.
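
As a further sanity check, you can create a test database and table from the same sqlcmd prompt; the names below are only examples:

CREATE DATABASE TestDb;
GO
USE TestDb;
CREATE TABLE dbo.TestTable (id INT PRIMARY KEY, name VARCHAR(500));
INSERT INTO dbo.TestTable VALUES (1, 'sagar'), (2, 'sangram');
SELECT * FROM dbo.TestTable;
GO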

We can use universal SQL GUI tools like SQuirreL SQL or DBeaver to connect to our newly installed
SQL Server. I had some success with SQuirreL SQL, but it got tangled up between Oracle Java and OpenJDK
when I tried it on Ubuntu 16. We will explore this soon.

References:
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-red-hat

Sunday, September 11, 2016

Running Ubuntu headless in Fedora 23 on Docker

First we will add the Docker repository for Fedora 23. Inside the directory

/etc/yum.repos.d/

add a "docker.repo" file with the following content. I used nano, a simple command-line text editor, with root privileges; most of this tutorial assumes root privileges on a bash shell.

The "docker.repo" file should look like:

[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/fedora/23
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg


Now we will update the repositories as follows:

dnf update

Let's install Docker on our Fedora machine:

dnf install docker-engine

Let's enable the Docker service as follows:

systemctl enable docker.service

Then start the Docker service as follows:

systemctl start docker

We can check whether the Docker service is running using the command below:

systemctl status docker

Now we will pull the Ubuntu image:

docker pull ubuntu

Once we have pulled the Ubuntu image we can run it:

docker run -it ubuntu
  
docker run -it ubuntu:latest /bin/bash

Every time we run an image it creates a new container; we can see the list of containers using the following command:

docker ps -a

By the way, to see the list of pulled images we can use the following command:

docker images

Once we have the container's command prompt, we can install some packages that will be useful later:

apt-get install nano

This installs the nano command-line text editor, while the command below installs a subset of networking-related tools suitable for a Docker container image:

apt-get install -y inetutils-ping

We can check the IP address of the Ubuntu container from the Fedora command prompt as follows:

docker inspect e350390fd549

The output of the above command is a JSON string; find the key "IPAddress" in it, which gives the IP address of the Ubuntu server running in Docker. Here 'e350390fd549' is the container id, which we can get by issuing docker ps -a or simply by looking at the command prompt of the Ubuntu machine.

Adding new users

adduser sangram

adduser sagar

add user sangram to sudo group

usermod -aG sudo sangram

Install ssh server as follows

apt-get install openssh-server

Modify the settings of the SSH daemon as follows.

Open config file

nano /etc/ssh/sshd_config

Modify it to allow root login and X11 forwarding, enable password authentication, and change the port from the default value to a suitable one. Check whether DSA, RSA and public-key authentication are enabled or disabled, and where the daemon will look for authorized keys. There may be text similar to the below:

DSAAuthentication yes
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys

change flags suitably.

Create a public/private key pair using either RSA or DSA:

ssh-keygen -t rsa

It will create two files under the .ssh dir in the home directory of the current user (root). In the same location create a file

authorized_keys 
 
and add the content of the public key to it. Now copy the private key to your Fedora machine and convert it to a suitable format using the puttygen command:

puttygen id_dsa  -O private-openssh -o my-openssh-key

Restart the SSH server using the command below (replace restart with start if it is not already running).

 /etc/init.d/ssh restart

Now from the Fedora machine run PuTTY, provide the private key under SSH auth, set the IP address to the Ubuntu server's IP address, and check that the port matches the one the SSH server is running on.

From command prompt you can check it as follows

ssh -p 2221 root@175.17.0.2

where 2221 is the SSH port and 175.17.0.2 is the IP address of your Ubuntu server.

Let's install a virtual framebuffer, a lightweight window manager (JWM) and a VNC server as follows:

apt-get install xvfb

apt-get install x11vnc

JWM is a very small window manager for the X server. We are using xdm instead of gdm or kdm; xdm is the default for the X server.
 
apt-get install jwm

apt-get install xterm

apt-get --reinstall install xfonts-base

To assign a password to the VNC login, issue the following command:

x11vnc -storepasswd

It will ask for the password and a confirmation; do not change the location where the password is stored (here it is /etc/x11vnc.pass).

We need to define the display settings as follows:

Xvfb -screen 0 800x600x16 -ac &

Now we will launch the window manager JWM as follows:

DISPLAY=:0 jwm &

To view our window manager, let's start the VNC server as follows:

x11vnc -rfbauth /etc/x11vnc.pass -display :0

Now from the Fedora machine launch a VNC viewer, say TigerVNC viewer, and enter the IP address of our Ubuntu server. It will ask for the VNC password; provide it and you will see JWM running.

You can save the changes made in the running container to the same image or to a new image, so that you can work further on it later, as follows:

docker commit e350390fd549 newimagename

Wish you the best of luck. In case I missed some step in between (as this was done over weekends), please correct me.
 

Wednesday, August 31, 2016

Linq - Group by into an array

LINQ has a lot of capability that we seldom use. For example, sometimes we need to group one column into a string array, or a collection of some sort such as a list, based on another column.

Here I will demonstrate grouping into an array.

For the purpose of demonstration I will create a class in an existing project:

  public class KeyValue
    {
        public string KeyCol { get; set; }
        public string ValueCol { get; set; }
    }
 
Now I will create a simple array of the 'KeyValue' type just created.

  var wordList = new Model.DTO.KeyValue[] {
                new Model.DTO.KeyValue {KeyCol="key1", ValueCol="value1" },
                new Model.DTO.KeyValue {KeyCol="key2", ValueCol="value1" },
                new Model.DTO.KeyValue {KeyCol="key3", ValueCol="value2" },
                new Model.DTO.KeyValue {KeyCol="key4", ValueCol="value2" },
                new Model.DTO.KeyValue {KeyCol="key5", ValueCol="value3" },
                new Model.DTO.KeyValue {KeyCol="key6", ValueCol="value4" }
            };

Let's build a LINQ query on this collection (array). The query groups the 'KeyCol' property by 'ValueCol' into the group 'g', i.e. 'g' is a collection of 'KeyCol' values.

            var query =from m in wordList
                     group m.KeyCol by m.ValueCol into g
                     select new { Name = g.Key, KeyCols = g.ToList().ToArray<string>() };

You can check the result set by looping over 'query':

    foreach (var rec in query)
    {
        var name = rec.Name;
        var strArray = rec.KeyCols;
    }

E.g. in the loop above, when 'name' is "value1", 'strArray' will have the value new string[] { "key1", "key2" }, and so on.

we can further generalize this query to create object array in group by.

Wednesday, August 17, 2016

Table valued function parameter in postgres SQL

   In PostgreSQL, whenever we create a new table it automatically creates a composite type that corresponds to the structure of the table. We can use this type to declare a variable that holds a single row of the corresponding table.
When we need to pass a whole result set as a parameter to a Postgres function, there is no native support for it, unlike Microsoft SQL Server, so converting an existing MSSQL procedure to a Postgres function can become quite a headache.
We can create an equivalent representation of the result set in XML or JSON and pass it to the function; inside the function we can deserialize it back into a result set for further operations. That way we can compensate for the absence of table-valued parameters in PostgreSQL.
Along with JSON and XML, we can also create an array of objects from the result set and pass that to the function as a parameter.
Here I will demonstrate this approach with the built-in array functionality of Postgres.

Let's create a table as follows:
CREATE TABLE public."Projects"
(
   id integer NOT NULL,
  "createdAt" timestamp with time zone,
  "updatedAt" timestamp with time zone,
  name character varying(255) NOT NULL,
  "UserId" integer
);

For the purpose of demonstration, add a few records into the "Projects" table with two different user ids, say 1 and 2, as in the sample below.
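
For example (the project names and timestamps here are just sample values):

insert into public."Projects"(id,"createdAt","updatedAt",name,"UserId")
values (1, now(), now(), 'Alpha', 1),
       (2, now(), now(), 'Beta',  1),
       (3, now(), now(), 'Gamma', 2);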

Now let's create one more table that will be used for passing the result set to our function; the function is a trivial one that just gives the count of rows per user.

CREATE table projectlist
(
                list int[],
                user_id int
);

Now we will use the ARRAY construct to serialize our result set and save it as a single row in the projectlist table as follows:

insert into projectlist(user_id,list)
select  1 "UserId",ARRAY(select id from "Projects" where "UserId" =1) ;

insert into projectlist(user_id,list)
select  2 "UserId",ARRAY(select id from "Projects" where "UserId" =2) ;

The ARRAY construct converts the values passed to it into an array; e.g. if it gets 3 records as input, 14, 15 and 16, the resulting value will be {14,15,16}. unnest is another Postgres function that does exactly the reverse, and we will use it for de-serialization.
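
A quick illustration of the round trip with sample values:

select ARRAY(select x from generate_series(14,16) x);  -- {14,15,16}
select unnest(ARRAY[14,15,16]);                        -- 14, 15 and 16 as three rows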
Below is my function that reads the record sets; each CTE unnests one of the serialized parameters back into rows:

CREATE OR REPLACE FUNCTION public.demo_func(
    IN  list1 projectlist,
    IN  list2 projectlist)
  RETURNS TABLE(user_id int, count bigint) AS
$BODY$
BEGIN
               
        RETURN QUERY
                WITH  list1_cte AS (
                        -- de-serialize the first parameter: one row per project id
                        SELECT list1.user_id AS user_id, unnest(list1.list) AS id
                ),
                list2_cte  AS (
                        -- de-serialize the second parameter the same way
                        SELECT list2.user_id AS user_id, unnest(list2.list) AS id
                )
                select  list1_cte.user_id,count(*) as count from list1_cte group by list1_cte.user_id
                union
                select  list2_cte.user_id,count(*) as count from list2_cte group by list2_cte.user_id;

END
$BODY$
LANGUAGE plpgsql ;

Now we need to join all the dots and call our function, passing the serialized result sets. We create two table-type variables, list1 and list2, and load them with the serialized data we already inserted into the table created for this purpose.
After that we call our function, which returns some records that I loop over using a cursor.

DO
$$
DECLARE list1 projectlist;
DECLARE list2 projectlist;
DECLARE cursor_op_record RECORD;
DECLARE cursor_var refcursor;
BEGIN
                select p.* into list1  from projectlist p where user_id=1;
                select q.* into list2  from  projectlist q where user_id=2;
   OPEN cursor_var FOR
   select user_id,count from public.demo_func(list1,list2);

   RAISE INFO 'user_id, count';
   LOOP
                FETCH cursor_var INTO cursor_op_record;
                EXIT WHEN NOT FOUND;

        RAISE INFO '%,%', cursor_op_record.user_id, cursor_op_record.count;
   END LOOP;
END$$;

Here our projectlist table has a list column which is just an int[], but we could also replace it with an array of a composite type so that whole tuples can be passed.

Monday, August 8, 2016

Some useful queries for Postgres beginners:


    After MySQL, PostgreSQL has been gaining popularity at multiple levels of software development. It can be freely downloaded from the internet. The Postgres server comes with command-line utilities to query, back up, restore and more. For all those spoiled by GUI tools, like me, pgAdmin 3 is the right choice.

pgAdmin 3 is a freely available GUI tool similar to SQL Server Management Studio for MSSQL. PostgreSQL comes with tons of features. Its DML syntax is quite similar to Oracle and MySQL, along with some additions of its own, but one point to note is that quoted identifiers are case-sensitive: a table named 'Users' (created with quotes) is different from a table named 'users'.

During my exploration of PostgreSQL, I collected some useful queries, obviously from resources on the internet. Many of them are from Stack Overflow; thanks to Stack Overflow for being a great place to seek information.

There are multiple ways to do the same thing, and I do not claim these are the most efficient ways, but it may be useful for beginners to find them in one place.

Here are some queries from my collection


Lists Databases:

SELECT datname FROM pg_database WHERE datistemplate = false;

        or 

\list

Select database:

\c poc

Lists all tables in the current database:

SELECT * 
FROM information_schema.tables 
WHERE table_type = 'BASE TABLE' 
    AND table_schema = 'public' 
ORDER BY table_type, table_name

        or

\dt

Creating a table with Identity /auto increment columns:

CREATE SEQUENCE user_id_seq;
-- "user" is a reserved word, so name the table users (or quote the identifier)
CREATE TABLE users (
    user_id smallint NOT NULL DEFAULT nextval('user_id_seq')
);

ALTER SEQUENCE user_id_seq OWNED BY users.user_id;
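
PostgreSQL also offers the serial pseudo-types as a shorthand that creates and owns the sequence automatically; a minimal sketch (with a hypothetical table name):

CREATE TABLE users_v2 (
    user_id smallserial NOT NULL
);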

All Constraints on Given table:

select * from information_schema.table_constraints where table_name='Projects'

Postgres tables without foreign key

SELECT *
FROM
information_schema.tables
WHERE
table_type = 'BASE TABLE'
AND table_schema = 'public'
and table_name not in (select distinct table_name from information_schema.table_constraints where constraint_type = 'FOREIGN KEY')


Postgres tables with foreign key:

SELECT
tc.constraint_name, tc.table_name, kcu.column_name,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu ON ccu.constraint_name = tc.constraint_name
WHERE
constraint_type = 'FOREIGN KEY'

pg_dump backup & restore at the command prompt:

pg_dump is a command-line utility in Postgres used to back up a database; the dump is restored with psql (or with pg_restore for custom-format dumps).

Backup:

pg_dump -U postgres dbname -f dbname_pg_dump_20160715.sql

Restore:

psql -h localhost -U postgresuser -d dbname < E:\Repo\project\DBScript\dbname_pg_dump_20160715.sql
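
Another common approach is a custom-format dump restored with pg_restore (the file name here just follows the same pattern as above):

pg_dump -U postgres -Fc dbname -f dbname_pg_dump_20160715.dump
pg_restore -U postgres -d dbname dbname_pg_dump_20160715.dump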

List foreign key on given table:

SELECT
tc.constraint_name, tc.table_name, kcu.column_name,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY' AND tc.table_name='Projects';

Table referencing given table:
Below query list all table that reference User table.

SELECT
tc.constraint_name, tc.table_name, kcu.column_name,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY' AND ccu.table_name='Users';

List functions in given database:

SELECT
distinct routines.routine_name
FROM
information_schema.routines
WHERE
routines.specific_schema='public'
ORDER BY
routines.routine_name


more detailed query can be

SELECT
routines.routine_name,
parameters.data_type,
parameters.ordinal_position,
routines.specific_schema
FROM
information_schema.routines
JOIN information_schema.parameters ON routines.specific_name=parameters.specific_name
WHERE
routines.specific_schema='public' AND routine_type='FUNCTION'
ORDER BY
routines.routine_name,
parameters.ordinal_position;

Creating New user:
   CREATE USER MyUser WITH PASSWORD 'abcd546fg';

To make a user a SuperUser:
    ALTER USER MyUser WITH SUPERUSER;

To make a user no longer a SuperUser:
    ALTER USER MyUser WITH NOSUPERUSER;

To just allow the user to create a database:
    ALTER USER MyUser CREATEDB;


Grant Rights to user:

GRANT ALL PRIVILEGES ON DATABASE DbName to MyUser;
GRANT EXECUTE ON FUNCTION public.fn_test() TO MyUser;
GRANT ALL ON  DbName.public."MyTable"  to MyUser;
GRANT SELECT,INSERT,UPDATE,DELETE ON  DbName.public."MyTable"  to MyUser;

Revoke Given Rights:

REVOKE TRUNCATE ON DbName.public."Users" FROM MyUser;
REVOKE DELETE ON DbName.public."Users" FROM MyUser;

Change Owner of Table,Sequence:
ALTER TABLE DbName.public.my_table OWNER TO MyUser;
ALTER SEQUENCE "Tablename_Columnname_seq" OWNER TO MyUser;
ALTER DATABASE DbName OWNER TO MyUser;

Rename User:
ALTER USER MyUser RENAME TO MyNewUser;

Set password:
ALTER USER MyUser WITH PASSWORD 'dljhfdkjs67N'

List all sequence:
SELECT c.relname FROM pg_class c WHERE c.relkind = 'S';


Postgres Service Start, Stop (Windows 10 only):
NET STOP postgresql-x64-9.5
NET START postgresql-x64-9.5


Find list of table with given column name:

select table_name from information_schema.columns where column_name = 'id'

Reseed existing Sequence:

SELECT setval('sequence_name', COALESCE((SELECT MAX(id)+1 FROM table_using_seq_as_id), 1), false);

Here the last parameter of 'setval' is set to false, which tells the sequence that it has not yet been called with this number; effectively, the next record in our table will get the value passed as the second parameter, otherwise it would first be incremented by one.

Please feel free to correct queries if  needed.