On Cloud9

I have long ridiculed cloud as mere hype. I have primarily worked in the IT services industry, where applications are huge and run on massive mainframes or UNIX servers hosted on the organization's premises. Since I started my career with a healthcare provider, I know how important data security is for any enterprise.

Also, upgrading the server OS to the latest version used to be a big task in itself; companies usually delayed upgrades for fear of breaking something unknown. The guiding principle used to be "Don't fix it if it's not broken". This pushed OS upgrades to the very last moment, when the vendor decided to stop supporting the older version. Nonetheless, there used to be many applications that continued to run on unsupported OS versions, and in some cases the product itself was out of support.

In such cases I always used to wonder under which business case organizations would want to migrate to the cloud and lose control over their infrastructure.


Cloud does offer easy scalability and infrastructure support, but my understanding was that these are concerns mostly for smaller organizations or start-ups. For big corporations, spending money on scaling infrastructure and acquiring experts to support it is not a challenge.

However, I do recognize now that many companies, not just new-age companies like Netflix, are using cloud; even older-generation telecom companies are looking to use cloud for some of their products. It could be private cloud or hybrid cloud, but still, there is some adoption.

The obvious leader in cloud as of now is AWS, followed by Azure from Microsoft. Google Cloud Platform (GCP) is a distant third; however, if you are interested in big data and ML services along with cloud, GCP is the go-to option.

Currently, I am working in the telecom domain; however, I think I need to move from telecom to cross-domain areas like cloud, machine learning, data science and NLP. Considering this, I have decided to get certified as a GCP Cloud Architect.

For the next couple of months, I will be focusing more on cloud technologies, starting with virtualization through to cloudification. These articles will be my notes for future reference, and can serve as supporting/guiding articles for fellow learners and professionals.

Using JSON datatype for database queries – PostgreSQL

I was torn between using a relational database and a document database like MongoDB or Elasticsearch for a certain requirement. In fact, I started learning Elasticsearch, but getting some tasks done with Elasticsearch is very painful compared to how easily they are done in a relational database. This was impacting my timeline, and I was spending way too much time troubleshooting.

This is when I came across the JSON datatype in PostgreSQL. It combines the ease of a relational database with the document query capabilities of Elasticsearch. Of course, it would be a bit slower than Elasticsearch (I have not benchmarked the performance, but that is my guess), but that is OK as I am in the prototype phase; I just need to validate my business case, and once it flies, I will switch to a suitable document database.

Meanwhile, let us see how to work with the JSON datatype in PostgreSQL.

select clause

Because the -> operator returns a JSON object, you can chain it with the ->> operator to retrieve the final value in text format.

-- ->> on the last key returns the value as text
SELECT id, productid, productdetails -> 'productBaseInfoV1' ->> 'productUrl' AS producturl
FROM public.ecomm_productdetails;

-- -> on the last key returns it as a JSON object
SELECT id, productid, productdetails -> 'productBaseInfoV1' -> 'productUrl' AS producturl
FROM public.ecomm_productdetails;
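To see the difference, the same chaining can be mirrored with a plain Python dict (a hypothetical document shaped like the productdetails column above): -> keeps you inside the JSON structure, while ->> on the last key hands back plain text.

```python
import json

# Hypothetical document shaped like the productdetails column above.
productdetails = json.loads(
    '{"productBaseInfoV1": {"productUrl": "https://example.com/p/123"}}'
)

inner = productdetails["productBaseInfoV1"]  # like ->  : still a JSON object
url = inner["productUrl"]                    # like ->> : the final text value

print(type(inner).__name__)  # → dict
print(url)                   # → https://example.com/p/123
```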

where clause

Please note that you always need to use ->> for the final element, since it needs to be matched against a text value.

SELECT id, productid,  productdetails
FROM public.ecomm_productdetails
WHERE productdetails -> 'productBaseInfoV1' ->> 'title' = 'Apple iPhone 6 (Grey, 128 GB)';


SELECT id, productid,  productdetails
FROM public.ecomm_productdetails
WHERE productdetails -> 'productBaseInfoV1' ->> 'title' like 'OPPO%';

Troubleshooting errors in a django web application deployed using NGINX and uWSGI

For any type of error, first check whether uWSGI is running fine:

myusername@ubuntu-512:~$ sudo service uwsgi status
● uwsgi.service - uWSGI Emperor
   Loaded: loaded (/etc/systemd/system/uwsgi.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-09 08:39:27 IST; 9min ago
  Process: 1451 ExecStartPre=/bin/bash -c mkdir -p /run/uwsgi; chown myusername:www-data /run/uwsgi (code=exited, status=0/SUCCESS)
 Main PID: 1530 (uwsgi)
   Status: "The Emperor is governing 3 vassals"
    Tasks: 7
   Memory: 142.7M
      CPU: 1.874s
   CGroup: /system.slice/uwsgi.service
           ├─1530 /usr/local/bin/uwsgi --emperor /home/myusername/uwsgi/sites
           ├─1564 /usr/local/bin/uwsgi --ini website1.ini
           ├─1565 /usr/local/bin/uwsgi --ini website2.ini
           ├─1625 /usr/local/bin/uwsgi --ini website3.ini
           ├─2022 /usr/local/bin/uwsgi --ini website3.ini
           ├─2033 /usr/local/bin/uwsgi --ini website2.ini
           └─2035 /usr/local/bin/uwsgi --ini website1.ini

Sep 09 08:44:29 ubuntu-512 uwsgi[1530]: Input to details english quote We-accept-the-love-we-think-we-deserve.
Sep 09 08:44:29 ubuntu-512 uwsgi[1530]: context: success
Sep 09 08:44:29 ubuntu-512 uwsgi[1530]: context: post_type quote
Sep 09 08:44:29 ubuntu-512 uwsgi[1530]: [pid: 2033|app: 0|req: 7/7] () {40 vars in 768 bytes} [Wed Sep  9 03:14:29 2020] GET
Sep 09 08:44:29 ubuntu-512 uwsgi[1530]: [pid: 2035|app: 0|req: 1/1] () {38 vars in 515 bytes} [Wed Sep  9 03:14:29 2020] GET /
Sep 09 08:44:29 ubuntu-512 uwsgi[1530]: announcing my loyalty to the Emperor...
Sep 09 08:44:29 ubuntu-512 uwsgi[1530]: Wed Sep  9 08:44:29 2020 - [emperor] vassal website1.ini is now loyal
Sep 09 08:46:03 ubuntu-512 uwsgi[1530]: [pid: 2035|app: 0|req: 2/2] () {40 vars in 748 bytes} [Wed Sep  9 03:16:03 2020] GET 
Sep 09 08:49:09 ubuntu-512 uwsgi[1530]: --- no python application found, check your startup logs for errors ---
Sep 09 08:49:09 ubuntu-512 uwsgi[1530]: [pid: 2022|app: -1|req: -1/5] () {46 vars in 692 bytes} [Wed Sep  9 03:19:09 2020] GET

You might see an error here. In the above log, the error shown is --- no python application found, check your startup logs for errors ---. In this case, we need to check the wsgi.py file settings. Your wsgi file should look something like below. Please note the lines where I have added the comment #added this.

import os
import sys #added this

from django.core.wsgi import get_wsgi_application

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) #added this
sys.path.append(BASE_DIR) #added this

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'yourapp.settings')

application = get_wsgi_application()


Another way to troubleshoot is to check whether the .sock file has been created at the location configured in your uWSGI .ini file.


If nothing gives any clue, try to see if you can run the application manually using the following command:

sudo uwsgi --http --home /var/www/mywebsite1/venvft --chdir /var/www/mywebsite1/mysite07 --wsgi-file /var/www/mywebsite1/mysite07/wsgi.py

You might get a different error here, e.g. some library is not installed. I got an error that the module requests was not found, so I installed it in the virtualenv as below.

$virtualenv venvft
created virtual environment CPython3.6.4.final.0-64 in 453ms
creator CPython3Posix(dest=/var/www/yourapp/venvft, clear=False, global=False)
seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/home/user/.local/share/virtualenv/seed-app-data/v1.0.1)
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator

$source venvft/bin/activate
$pip install django 
$pip install psycopg2
$pip install requests

Another error I got is as below.

*** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72920 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
Traceback (most recent call last):
  File "/var/www/website3/mysite07/wsgi.py", line 19, in 
    application = get_wsgi_application()
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/__init__.py", line 24, in setup
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/apps/registry.py", line 91, in populate
    app_config = AppConfig.create(entry)
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/apps/config.py", line 116, in create
    mod = import_module(mod_path)
  File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 994, in _gcd_import
  File "", line 971, in _find_and_load
  File "", line 955, in _find_and_load_unlocked
  File "", line 665, in _load_unlocked
  File "", line 678, in exec_module
  File "", line 219, in _call_with_frames_removed
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/contrib/auth/apps.py", line 8, in 
    from .checks import check_models_permissions, check_user_model
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/contrib/auth/checks.py", line 8, in 
    from .management import _get_builtin_permissions
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/contrib/auth/management/__init__.py", line 9, in 
    from django.contrib.contenttypes.management import create_contenttypes
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/contrib/contenttypes/management/__init__.py", line 2, in 
    from django.db import (
  File "/var/www/website3/venvft/lib/python3.6/site-packages/django/db/migrations/__init__.py", line 1, in 
    from .migration import Migration, swappable_dependency  # NOQA
ModuleNotFoundError: No module named 'django.db.migrations.migration'
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 2699, cores: 1)

Here, if you look at the details, the error is with migrations: ModuleNotFoundError: No module named 'django.db.migrations.migration'. A common solution is to reinstall django. Fortunately, we are using a virtualenv here, so I can remove and reinstall it without impacting other applications.

$rm -rf venvft
$virtualenv venvft
created virtual environment CPython3.6.4.final.0-64 in 453ms
creator CPython3Posix(dest=/var/www/yourapp/venvft, clear=False, global=False)
seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/home/user/.local/share/virtualenv/seed-app-data/v1.0.1)
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator

$source venvft/bin/activate
$pip install django 
$pip install psycopg2
$pip install requests

Once that was done, I reran the following:

sudo uwsgi --http --home /var/www/mywebsite1/venvft --chdir /var/www/mywebsite1/mysite07 --wsgi-file /var/www/mywebsite1/mysite07/wsgi.py

Here again I got the below error:

Fatal Python error: Py_Initialize: Unable to get the locale encoding
ModuleNotFoundError: No module named 'encodings'

Current thread 0x00007f7b5b79d700 (most recent call first):
Aborted (core dumped)

However, when I tried to access my website, it was working properly, so for now I decided to ignore the error.

django QuerySet examples

If you want to fetch the whole table, or the first n rows (let's try 30 in this case):

context = MyModel.objects.all()
context = MyModel.objects.all()[:30]
## You can iterate over the output as well
for e in MyModel.objects.all():
    print(e)

If you want to fetch data in a specific order. A - (minus) prefix indicates descending order.

context = MyModel.objects.all().order_by('pub_date')
context = MyModel.objects.all().order_by('-pub_date')


exclude clause

context = MyModel.objects.exclude(pub_date__gt=datetime.date(2005, 1, 3)).exclude(headline='Hello')

For a SQL LIKE-style filter, use field lookups such as __contains or __startswith, e.g. MyModel.objects.filter(headline__startswith='Hello').

If you want to fetch distinct values. Please note you can combine order_by and distinct.

context = MyModel.objects.all().distinct('blog')
context = MyModel.objects.all().order_by('pub_date').distinct('blog')



values() returns a QuerySet that yields dictionaries, rather than model instances, when used as an iterable.

context = MyModel.objects.all().distinct('blog').values()


Calling one python script from another

Ideally, you should import the required Python file into the other using import and call the required functions from there, but there could be instances where you need to trigger one Python script from another.

This is fairly simple.

Let's say I have the following two programs located in the same folder: one.py and one_sub.py. Calling one_sub.py from one.py works like this:


import os

os.system('python3.6 ' + 'one_sub.py')

This code will trigger one_sub.py.

Passing parameters to the called file

This needs minor changes. Whatever you need to pass, just mention the value or variable after a space.

import os

os.system('python3.6 ' + 'one_sub.py 11 ')


The called script, one_sub.py, reads the parameters from sys.argv:

import sys
print("---This is inside script 2")

input_value = int(sys.argv[1])
print(" 0 ", sys.argv[0])
print(" 1 ", sys.argv[1], type(sys.argv[1]))

Please note the 0th parameter is always the script name; you can pass multiple parameters.

Also, received arguments are always strings; you need to convert them to the required datatype, e.g. int(sys.argv[1]).
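As a side note, the standard library's subprocess module is a more robust way to trigger one script from another, since it lets you capture output and check the exit status. A minimal sketch (the inline -c snippet stands in for a hypothetical one_sub.py so the example is self-contained):

```python
import subprocess
import sys

# Run a child Python process with an argument and capture its output.
# The inline -c code plays the role of one_sub.py.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(int(sys.argv[1]) + 1)", "11"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # → 12
```

Unlike os.system, this raises CalledProcessError if the child exits with a non-zero status, and stdout/stderr are available on the returned object.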

Triggering a file located in a different directory

import os

file_path = '/home/user/code/quant/source/library/'
os.system('python3.6 ' + file_path + 'one_sub.py 11 22')

Please note the trailing '/' in file_path.

How to insert data from pandas to PostgreSQL

When we have done our data analysis using pandas and need to store the results, we can use the to_csv option. However, if the data is too big, it makes sense to store it in a database.
Let us have a look at two simple methods to store data in a PostgreSQL database.

Using sqlalchemy


import pandas as pd
from sqlalchemy import create_engine

df = pd.read_csv('input_file.csv', parse_dates=['timestamp'], index_col=0)

engine = create_engine('postgresql://admin:admin@localhost:5432/ibnse')
df.to_sql('stk_1min1', engine, if_exists='append')

Using psycopg2


import pandas as pd
import psycopg2
conn = psycopg2.connect(database='dbname', user="user", password="password", host="", port="5432")
cur = conn.cursor()

df = pd.read_csv('input_file.csv', parse_dates=['timestamp'], index_col=0)
for index, row in df.iterrows():
    insertdata = "('"+str(index)+"','"+str(row[0])+"','"+str(row[1])+"','"+str(row[2])+"','"+str(row[3])+"','"+str(row[4])+"','"+str(row[5])+"','"+str(row[6])+"','"+str(row[7])+"','"+str(row[8])+"')"
    try:
        cur.execute("INSERT INTO stk_1min1 VALUES " + insertdata)
        conn.commit()
        print("row inserted:", insertdata)
    except psycopg2.IntegrityError:
        conn.rollback()
        print("Row already exists")
    except Exception as e:
        conn.rollback()
        print("some insert error:", e, "ins:", insertdata)

A few points while using sqlalchemy:

  • If the table does not exist, it will get created.
  • If the table exists and you want to append, you need to use if_exists='append'. It is a wise choice in most cases.
  • With sqlalchemy, if even a single record raises a unique-index error, the whole pandas DataFrame insert fails and nothing is inserted.
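As a side note, parameterized inserts avoid the quoting pitfalls of building the VALUES string by hand, and handle the duplicate-row case the same way. A minimal sketch using the stdlib sqlite3 module so it runs anywhere; with psycopg2 the placeholder style is %s instead of ?, and the table and columns here are hypothetical:

```python
import sqlite3

# In-memory table standing in for stk_1min1 (hypothetical columns).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE stk_1min1 (ts TEXT PRIMARY KEY, close REAL)")

rows = [
    ("2020-09-09 09:15:00", 2501.5),
    ("2020-09-09 09:16:00", 2502.0),
    ("2020-09-09 09:15:00", 2501.5),  # duplicate primary key
]
for ts, close in rows:
    try:
        # Placeholders let the driver handle quoting and escaping.
        cur.execute("INSERT INTO stk_1min1 VALUES (?, ?)", (ts, close))
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()
        print("Row already exists")

cur.execute("SELECT COUNT(*) FROM stk_1min1")
count = cur.fetchone()[0]
print(count)  # → 2
```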

How to read data from PostgreSQL to Pandas DataFrame

Usually, for training and testing, we create a pandas DataFrame from a csv file, but when we are working with a large dataset or with data stored in a database, we need a way to fetch data into a pandas DataFrame directly from the database. In this article we will have a look at two methods for doing this.

Using sqlalchemy


import pandas as pd
from sqlalchemy import create_engine
engine = create_engine('postgresql://user:password@localhost:5432/dbName')

df = pd.read_sql_query("SELECT * FROM public.stk_1min1 where symbol = 'TCS'",con=engine)


You can also build the query string before calling the pd.read_sql_query function:

import pandas as pd
from sqlalchemy import create_engine
engine = create_engine('postgresql://user:password@localhost:5432/dbName')

stk = 'TCS'
query = "SELECT * FROM public.stk_1min1 where symbol = "+"'"+stk+"'"

df = pd.read_sql_query(query,con=engine)

Using psycopg2


import pandas as pd
import psycopg2
conn = psycopg2.connect(database='dbname', user="user", password="password", host="", port="5432")

df = pd.read_sql_query("SELECT * FROM public.stk_1min1 where symbol = 'TCS'",con=conn,index_col=['index'])


Similar to the sqlalchemy example, you can build the query string before calling the read_sql_query function.
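Rather than concatenating stk into the query string, pd.read_sql_query also accepts bound parameters via its params argument, which avoids quoting mistakes and SQL injection. A sketch against an in-memory sqlite3 table with made-up data (with a PostgreSQL driver the placeholder would be %s rather than ?):

```python
import sqlite3
import pandas as pd

# In-memory table standing in for public.stk_1min1 (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stk_1min1 (symbol TEXT, close REAL)")
conn.executemany("INSERT INTO stk_1min1 VALUES (?, ?)",
                 [("TCS", 2501.5), ("INFY", 700.0)])
conn.commit()

stk = "TCS"
df = pd.read_sql_query("SELECT * FROM stk_1min1 WHERE symbol = ?",
                       con=conn, params=(stk,))
print(len(df))  # → 1
```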

Getting started with Interactive Brokers Python API on Ubuntu

Interactive Brokers offers one of the most advanced trading desktop applications, and the best part is that it offers API access to most of the world's leading stock exchanges. For India, Interactive Brokers provides free access to NSE data (with monthly account maintenance charges of INR 200), which is nothing if you compare the pricing of its main competitor Zerodha (streaming INR 2000 + historic INR 2000).

In this article, let us have a look at how to get started with Interactive Brokers API access.

Step#1  Open an account

First and foremost, you need a funded account with Interactive Brokers. They open accounts virtually, provided you have scanned copies of all relevant documents.

If you need access to market data, you need to keep $500 or its equivalent in your account. In INR, you can transfer ~40,000/- to be on the safer side.

Step#2 Activate Market data.

It takes about a week for your documents to be verified and your account activated. Once your account is activated, you need to subscribe to market data using the following path.

Log in to the web portal –> User Settings –> Market Data Subscription (click on the gear sign) –> select the required exchanges and market data type –> confirm.

And you are all set.

Please note that for NSE, Equity and Futures & Options market data are separate subscriptions, so you need to select them individually.

Step#3 Download TWS

You can download TWS (Traders Workstation) from here. It is one of the most advanced platforms if you compare it with other competitors.

You might ask: I need API access, why do I need TWS? Well, the Interactive Brokers API needs TWS to be installed on the local machine. In a way, the API piggybacks on TWS. More on this later.

Step#4 Download API Libraries

You can download the API libraries from here. Choose the stable version.

Step#5 Basic Setup

Let us say the downloaded version is "twsapi_macunix.979.01.zip". This might be different for you, based on the latest version released by IBKR.

$cd ~/Downloads
$sudo unzip twsapi_macunix.979.01.zip -d $HOME/
$cd ~/IBJts
~/IBJts$ ls -la
total 68
drwxr-xr-x  5 userid root            4096 May 29 09:39 .
drwxr-xr-x 36 userid userid  4096 May 29 11:16 ..
-rw-r--r--  1 userid root              21 Feb  6 01:04 API_VersionNum.txt
drwxr-xr-x  2 userid userid  4096 May 28 22:38 log
drwxr-xr-x  5 userid root            4096 Feb  6 01:07 samples
drwxr-xr-x  5 userid root            4096 Feb  6 01:07 source

You can find code inside the source and samples folders.

Step#6  Configure TWS

As mentioned in step#3, you need TWS. You need to enable API access in TWS by making the following changes.

Go to Edit -> Global Configuration -> API -> Settings and make sure the "Enable ActiveX and Socket Clients" option is activated. Note the default socket ports:

Live account: 7496
Paper trading: 7497

Step#7 Sample Code

Go to source –> pythonclient –> tests

Here you will find many code samples for various tasks.

Important note:

Whenever you run the API, you need to make sure TWS is running; otherwise you will get an error like the one below.

~/IBJts$ /usr/bin/python3 "/home/conquistadorjd/IBJts/source/pythonclient/working/check_connection v 4.0.py"
ERROR -1 502 Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket EClients" 
is enabled and connection port is the same as "Socket Port" on the 
TWS "Edit->Global Configuration...->API->Settings" menu. Live Trading ports: 
TWS: 7496; IB Gateway: 4001. Simulated Trading ports for new installations 
of version 954.1 or newer:  TWS: 7497; IB Gateway: 4002

Step#8 Sample code for your first program

from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class IBapi(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

app = IBapi()
app.connect('127.0.0.1', 7496, 1)  # 7496 = live account, 7497 = paper trading
app.run()

#Uncomment this section if unable to connect
#and to prevent errors on a reconnect
#import time
#time.sleep(2)
#app.disconnect()

How to install jupyter notebook on Ubuntu

As with any other package, before installing anything on Ubuntu, run the following command:

$sudo apt-get update

Once the system is updated to the latest packages, you can install jupyter using the following command:

$sudo python3.6 -m pip install jupyter
WARNING: The directory '/home/username/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting jupyter
Downloading jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Collecting notebook
Downloading notebook-6.0.3-py3-none-any.whl (9.7 MB)
|████████████████████████████████| 9.7 MB 76 kB/s 
Collecting ipywidgets
Downloading ipywidgets-7.5.1-py2.py3-none-any.whl (121 kB)
|████████████████████████████████| 121 kB 294 kB/s 
Collecting nbconvert
Downloading nbconvert-5.6.1-py2.py3-none-any.whl (455 kB)
|████████████████████████████████| 455 kB 192 kB/s 
Collecting ipykernel
Downloading ipykernel-5.2.1-py3-none-any.whl (118 kB)
|████████████████████████████████| 118 kB 75 kB/s 
Collecting qtconsole
Downloading qtconsole-4.7.3-py2.py3-none-any.whl (117 kB)
|████████████████████████████████| 117 kB 126 kB/s 
Collecting jupyter-console
Downloading jupyter_console-6.1.0-py2.py3-none-any.whl (21 kB)
Collecting terminado>=0.8.1
Downloading terminado-0.8.3-py2.py3-none-any.whl (33 kB)
Collecting jupyter-client>=5.3.4
Downloading jupyter_client-6.1.3-py3-none-any.whl (106 kB)
|████████████████████████████████| 106 kB 139 kB/s 
Collecting Send2Trash
Downloading Send2Trash-1.5.0-py3-none-any.whl (12 kB)
Collecting ipython-genutils
Downloading ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting jinja2
Downloading Jinja2-2.11.2-py2.py3-none-any.whl (125 kB)
|████████████████████████████████| 125 kB 263 kB/s 
Collecting nbformat
Downloading nbformat-5.0.6-py3-none-any.whl (170 kB)
|████████████████████████████████| 170 kB 382 kB/s 
Collecting prometheus-client
Downloading prometheus_client-0.7.1.tar.gz (38 kB)
Collecting tornado>=5.0
Downloading tornado-6.0.4.tar.gz (496 kB)
|████████████████████████████████| 496 kB 99 kB/s 
Collecting traitlets>=4.2.1
Downloading traitlets-4.3.3-py2.py3-none-any.whl (75 kB)
|████████████████████████████████| 75 kB 82 kB/s 
Collecting pyzmq>=17
Downloading pyzmq-19.0.0-cp36-cp36m-manylinux1_x86_64.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 71 kB/s 
Collecting jupyter-core>=4.6.1
Downloading jupyter_core-4.6.3-py2.py3-none-any.whl (83 kB)
|████████████████████████████████| 83 kB 212 kB/s 
Collecting widgetsnbextension~=3.5.0
Downloading widgetsnbextension-3.5.1-py2.py3-none-any.whl (2.2 MB)
|████████████████████████████████| 2.2 MB 178 kB/s 
Collecting ipython>=4.0.0; python_version >= "3.3"
Downloading ipython-7.13.0-py3-none-any.whl (780 kB)
|████████████████████████████████| 780 kB 203 kB/s 
Collecting bleach
Downloading bleach-3.1.5-py2.py3-none-any.whl (151 kB)
|████████████████████████████████| 151 kB 290 kB/s 
Collecting pygments
Downloading Pygments-2.6.1-py3-none-any.whl (914 kB)
|████████████████████████████████| 914 kB 130 kB/s 
Collecting mistune<2,>=0.8.1
Downloading mistune-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting pandocfilters>=1.4.1
Downloading pandocfilters-1.4.2.tar.gz (14 kB)
Collecting testpath
Downloading testpath-0.4.4-py2.py3-none-any.whl (163 kB)
|████████████████████████████████| 163 kB 239 kB/s 
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)",)': /simple/defusedxml/
Collecting defusedxml
Downloading defusedxml-0.6.0-py2.py3-none-any.whl (23 kB)
Collecting entrypoints>=0.2.2
Downloading entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Collecting qtpy
Downloading QtPy-1.9.0-py2.py3-none-any.whl (54 kB)
|████████████████████████████████| 54 kB 36 kB/s 
Collecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
Downloading prompt_toolkit-3.0.5-py3-none-any.whl (351 kB)
|████████████████████████████████| 351 kB 62 kB/s 
Collecting ptyprocess; os_name != "nt"
Downloading ptyprocess-0.6.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: python-dateutil>=2.1 in /home/username/.local/lib/python3.6/site-packages (from jupyter-client>=5.3.4->notebook->jupyter) (2.8.1)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/lib/python3/dist-packages (from jinja2->notebook->jupyter) (1.0)
Collecting jsonschema!=2.5.0,>=2.4
Downloading jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
|████████████████████████████████| 56 kB 106 kB/s 
Requirement already satisfied: six in /home/username/.local/lib/python3.6/site-packages (from traitlets>=4.2.1->notebook->jupyter) (1.14.0)
Collecting decorator
Downloading decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/lib/python3/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter) (4.2.1)
Collecting pickleshare
Downloading pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Requirement already satisfied: setuptools>=18.5 in /home/username/.local/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter) (45.1.0)
Collecting jedi>=0.10
Downloading jedi-0.17.0-py2.py3-none-any.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 98 kB/s 
Collecting backcall
Downloading backcall-0.1.0.tar.gz (9.7 kB)
Collecting packaging
Downloading packaging-20.3-py2.py3-none-any.whl (37 kB)
Collecting webencodings
Downloading webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Collecting wcwidth
Downloading wcwidth-0.1.9-py2.py3-none-any.whl (19 kB)
Collecting attrs>=17.4.0
Downloading attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from jsonschema!=2.5.0,>=2.4->nbformat->notebook->jupyter) (1.6.0)
Collecting pyrsistent>=0.14.0
Downloading pyrsistent-0.16.0.tar.gz (108 kB)
|████████████████████████████████| 108 kB 202 kB/s 
Collecting parso>=0.7.0
Downloading parso-0.7.0-py2.py3-none-any.whl (100 kB)
|████████████████████████████████| 100 kB 243 kB/s 
Collecting pyparsing>=2.0.2
Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
|████████████████████████████████| 67 kB 75 kB/s 
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->jsonschema!=2.5.0,>=2.4->nbformat->notebook->jupyter) (3.1.0)
Building wheels for collected packages: prometheus-client, tornado, pandocfilters, backcall, pyrsistent
Building wheel for prometheus-client (setup.py) ... done
Created wheel for prometheus-client: filename=prometheus_client-0.7.1-py3-none-any.whl size=41402 sha256=1877caa0c6edd08a07b1fa309bfbdda584bd76e844bdd201b86be6d7d22cafbe
Stored in directory: /tmp/pip-ephem-wheel-cache-8bi_3gi1/wheels/1d/4a/79/a3ad3f74b3495b4555359375ca33ad7b64e77f8b7a53c8894f
Building wheel for tornado (setup.py) ... done
Created wheel for tornado: filename=tornado-6.0.4-cp36-cp36m-linux_x86_64.whl size=427632 sha256=d6aafe6d2604804cf85b680683a70cef28b97f1eb209117421a48c1aa9ab7c68
Stored in directory: /tmp/pip-ephem-wheel-cache-8bi_3gi1/wheels/37/a7/db/2d592e44029ef817f3ef63ea991db34191cebaef087a96f505
Building wheel for pandocfilters (setup.py) ... done
Created wheel for pandocfilters: filename=pandocfilters-1.4.2-py3-none-any.whl size=7855 sha256=422d885d227893b61571ea6ff8f118801dc5949559cdb74af149b10804d8ebbc
Stored in directory: /tmp/pip-ephem-wheel-cache-8bi_3gi1/wheels/46/c4/40/718c6fd14c2129ccaee10e0cf03ef6c4d01d98cad5dbbfda38
Building wheel for backcall (setup.py) ... done
Created wheel for backcall: filename=backcall-0.1.0-py3-none-any.whl size=10412 sha256=dc8f81fbfca6f8b8b1321046de42f718a1f7b1cc1ce997bbc18228a2c9d3c0ff
Stored in directory: /tmp/pip-ephem-wheel-cache-8bi_3gi1/wheels/b4/cb/f1/d142b3bb45d488612cf3943d8a1db090eb95e6687045ba61d1
Building wheel for pyrsistent (setup.py) ... done
Created wheel for pyrsistent: filename=pyrsistent-0.16.0-cp36-cp36m-linux_x86_64.whl size=97738 sha256=f6ffcf09823e6aadbd67a5b06ba106699614a0b3add6f7a0ba92a1e6fa2ea49f
Stored in directory: /tmp/pip-ephem-wheel-cache-8bi_3gi1/wheels/d1/8a/1c/32ab9017418a2c64e4fbaf503c08648bed2f8eb311b869a464
Successfully built prometheus-client tornado pandocfilters backcall pyrsistent
Installing collected packages: ptyprocess, tornado, terminado, ipython-genutils, decorator, traitlets, jupyter-core, pyzmq, jupyter-client, Send2Trash, pyparsing, packaging, webencodings, bleach, pygments, mistune, jinja2, pandocfilters, attrs, pyrsistent, jsonschema, nbformat, testpath, defusedxml, entrypoints, nbconvert, wcwidth, prompt-toolkit, pickleshare, parso, jedi, backcall, ipython, ipykernel, prometheus-client, notebook, widgetsnbextension, ipywidgets, qtpy, qtconsole, jupyter-console, jupyter
Successfully installed Send2Trash-1.5.0 attrs-19.3.0 backcall-0.1.0 bleach-3.1.5 decorator-4.4.2 defusedxml-0.6.0 entrypoints-0.3 ipykernel-5.2.1 ipython-7.13.0 ipython-genutils-0.2.0 ipywidgets-7.5.1 jedi-0.17.0 jinja2-2.11.2 jsonschema-3.2.0 jupyter-1.0.0 jupyter-client-6.1.3 jupyter-console-6.1.0 jupyter-core-4.6.3 mistune-0.8.4 nbconvert-5.6.1 nbformat-5.0.6 notebook-6.0.3 packaging-20.3 pandocfilters-1.4.2 parso-0.7.0 pickleshare-0.7.5 prometheus-client-0.7.1 prompt-toolkit-3.0.5 ptyprocess-0.6.0 pygments-2.6.1 pyparsing-2.4.7 pyrsistent-0.16.0 pyzmq-19.0.0 qtconsole-4.7.3 qtpy-1.9.0 terminado-0.8.3 testpath-0.4.4 tornado-6.0.4 traitlets-4.3.3 wcwidth-0.1.9 webencodings-0.5.1 widgetsnbextension-3.5.1


Yeah, that is a pretty long list of packages required for jupyter!

Once the installation is completed, you can start jupyter using the following command:

$ jupyter notebook --allow-root
[I 20:44:06.979 NotebookApp] The port 8888 is already in use, trying another port.
[I 20:44:06.982 NotebookApp] Serving notebooks from local directory: /home/username/code/quant
[I 20:44:06.982 NotebookApp] The Jupyter Notebook is running at:
[I 20:44:06.982 NotebookApp] http://localhost:8889/?token=94cc0863965bf1d2751ad8a7a4b26d08f1b9f0be4f380c9a
[I 20:44:06.982 NotebookApp] or
[I 20:44:06.982 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 20:44:06.987 NotebookApp]

To access the notebook, open this file in a browser:
Or copy and paste one of these URLs:
[W 20:44:09.233 NotebookApp] 404 GET /api/kernels/7f717081-a79b-4170-b5be-c7e548ea28c7/channels?session_id=a53b179c3d2c4deda89418767c9bb6b9 ( Kernel does not exist: 7f717081-a79b-4170-b5be-c7e548ea28c7
[W 20:44:09.249 NotebookApp] 404 GET /api/kernels/7f717081-a79b-4170-b5be-c7e548ea28c7/channels?session_id=a53b179c3d2c4deda89418767c9bb6b9 ( 19.55ms referer=None

You can then access your jupyter notebooks at http://localhost:8889.
Here is how it looks:

Hope this helps.

Using Slugify with Django blog to create slug or url

In this article we will have a look at how to use a Django utility to create the slug field automatically; most importantly, you can create slugs in languages other than English too.

For creating blog application, please refer to post how to create Blog app using Django.

Once you create the blog application, your admin screen will look something like below.

If you look carefully, slug is a user-input field; now we want to make it an auto-created field by making some changes in models.py.

from django.db import models
from django.contrib.auth.models import User
from django.utils.text import slugify                                # add this

class Post(models.Model):
    title = models.CharField(max_length=200, unique=True)
    slug = models.SlugField(max_length=200, unique=True,editable=False)  # Note the changes here, editable is false.
    author_local = models.ForeignKey(User, on_delete= models.CASCADE,related_name='blog_posts',default="admin")
    updated_on = models.DateTimeField(auto_now= True)
    content = models.TextField()
    created_on = models.DateTimeField(auto_now_add=True)
    status = models.IntegerField(choices=STATUS, default=0)
    post_type = models.CharField(max_length=15,choices=POST_CHOICES,default=None,blank=True)
    category = models.CharField(max_length=50,default=None,blank=True)
    featured_image = models.ImageField(upload_to='img', blank=True, null=True)

    class Meta:
        ordering = ['-created_on']

    def save(self, *args, **kwargs):                                  # add this
        self.slug = slugify(self.title, allow_unicode=True)           # add this
        super().save(*args, **kwargs)                                 # add this

    def __str__(self):
        return self.title

Since you are making changes in models.py, you need to run the following commands:


$python3.6 manage.py makemigrations yourappname
$python3.6 manage.py migrate


Now access the admin again; it will look like below, and once you save the post, the slug is created automatically.

Since we are using Unicode (allow_unicode=True), it works for the Devanagari script (Marathi and Hindi). I did not test other languages, but it should work for them too.


You might be wondering why the content text field has options like the WordPress editor in my screenshot. Please refer to the post How to add Summernote WYSIWYG Editor in Django.