Gliimly Single-Page Documentation for version 101

This page contains all Gliimly documentation topics combined into one, which may make it easier to search.

123-hello-world
about-gliim
after-handler
application-setup
before-handler
begin-handler
begin-transaction
break-loop
call-extended
call-handler
call-remote
call-web
CGI
Client-API
close-file
code-blocks
command-line
commit-transaction
connect-apache-tcp-socket
connect-apache-unix-socket
connect-haproxy-tcp-socket
connect-nginx-tcp-socket
connect-nginx-unix-socket
continue-loop
copy-file
copy-string
count-substring
current-row
database-config-file
db-error
debugging
decode-base64
decode-hex
decode-url
decode-web
decrypt-data
delete-cookie
delete-fifo
delete-file
delete-index
delete-lifo
delete-list
delete-string
derive-key
directories
documentation
do-once
encode-base64
encode-hex
encode-url
encode-web
encrypt-data
error-code
error-handling
exec-program
exit-handler
extended-mode
file-position
file-storage
file-uploading
finish-output
flush-output
get-app
get-array
get-cookie
get-index
get-list
get-message
get-param
get-req
get-sys
get-time
gg
handler-status
hash-string
hmac-string
if-defined
if-true
inline-code
install-arch
install-debian
install-fedora
install
install-opensuse
json-doc
license
lock-file
lower-string
mariadb-database
match-regex
memory-handling
mgrg
new-array
new-fifo
new-index
new-lifo
new-list
new-message
new-remote
number-expressions
number-string
open-file
out-header
output-statement
pause-program
pf-out
pf-url
pf-web
p-num
position-list
postgresql-database
p-out
p-path
p-source-file
p-source-line
purge-array
purge-fifo
purge-index
purge-lifo
purge-list
p-url
p-web
random-crypto
random-string
read-array
read-fifo
read-file
read-index
read-json
read-lifo
read-line
read-list
read-message
read-remote
read-split
rename-file
replace-string
report-error
request-body
request
resize-array
rewind-fifo
rewind-lifo
rollback-transaction
run-query
run-remote
SELinux
SEMI
send-file
Server-API
service
set-bool
set-cookie
set-number
set-param
set-string
silent-header
split-string
sqlite-database
start-loop
statements
stat-file
string-length
string-number
syntax-highlighting
temporary-file
text-utf8
trace-run
trim-string
uninstall
uniq-file
unlock-file
unused-var
upper-string
use-cursor
utf8-text
variable-scope
write-array
write-fifo
write-file
write-index
write-lifo
write-list
write-message
write-string
 123 hello world

Step 1: Install Gliimly
First install Gliimly.
Step 2: Build it
Create Hello World source file (hello.gliim) in a new directory; note it's all one bash command:
echo 'begin-handler /hello public
    @Hello World!
end-handler' > hello.gliim

Create Hello World application:
sudo mgrg -i -u $(whoami) helloworld

Make Hello World application:
gg -q

Step 3: Run it
You can run Hello World both as a service and from command line:
Expected result
Hello World!

See also
Quick start
123-hello-world  
See all
documentation
 About gliim

Modeling language
Gliimly is a very high-level modeling language. It's about solving problems by modeling the solution and connecting the components to create high-performance, low-footprint executables, not about managing memory, manipulating bits and bytes or writing complex code.
Syntax matters
The syntax of a language matters, not just for writing code now, but for someone else reading it years later. The Gliimly language is designed to be intuitive, easy and rapid to use on both ends, and to be closer to the way humans are wired than to the way machines are.
Service oriented
A Gliimly program works as a service provider, meaning it handles service requests by providing a reply. It can be either a service or a command-line program that processes GET, POST, PUT, PATCH, DELETE or any other HTTP requests.

The URL for a request must state the application name, and also a request name, which corresponds to the source file handling it. So, "/app-name/my-request" means that the application name is "app-name" and that the "my-request.gliim" file will implement the request handler. A request executes in this order:
Running as a service
A Gliimly service is served by either
Each Gliimly service process handles one request at a time, and all such processes work in parallel. This means you do not need to worry about thread-safety with Gliimly. Server processes generally stay up across any number of requests, which improves response times. The balance between the number of processes and memory usage during high request loads can be achieved with the adaptive feature of mgrg, Gliimly's service process manager.

A service can be requested by:
With call-remote, you can execute remote requests in parallel, and get results, error messages and exit status in a single statement. This makes it easy to distribute and parallelize your application logic and/or build application tiers on a local or any number of remote machines, without having to write any multi-threaded code.
Command-line program
A command-line program handles a single request before it exits. This may be suitable for batch jobs, for use in shell scripts, for testing/mocking, or any other situation where it is more useful or convenient to execute a command-line program. Note that a command-line program can also double as CGI (Common Gateway Interface).
Usage
Gliimly services and command-line programs can implement most back-end application layers, including
Language
The Gliimly programming language is memory-safe, meaning it will prevent you from accidentally overwriting memory or freeing it when it shouldn't be freed. Gliimly's memory-handling is not limited to memory safety; it also includes automatic freeing of memory at the end of a request, preventing memory leaks, which can be fatal to long-running processes. Similarly, files opened with file-handling statements are automatically closed at the end of each request, serving the same purpose.
Types
Gliimly is a strongly-typed language, with only three primitive types (numbers, strings and booleans) and a number of structured types (message, split-string, array, index, index-cursor, fifo, lifo, list, file and service). Gliimly is a declarative language, with a few lines of code implementing large functionality. Gliimly is also very simple - it does not even have expressions in the sense other languages do! That's because it's designed to achieve application goals with less coding.

The number type is a signed 64-bit integer. The boolean type evaluates to true (non-zero) or false (zero). The string type is any sequence of bytes (binary or text), always terminated with a null character that is not counted in the string's length. All constants follow C formatting rules.
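For illustration, here is a minimal hedged sketch of the three primitive types in a handler (the set-bool form is an assumption, modeled on set-number and set-string):
 begin-handler /types-demo public
     // number: a signed 64-bit integer
     set-number count = 42
     // string: any sequence of bytes
     set-string name = "Gliimly"
     // boolean: true or false (set-bool is assumed to follow the same assignment form)
     set-bool ready = true
     if-true ready equal true
         @count is <<p-num count>>, name is <<p-out name>>
     end-if
 end-handler
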
Statements
Gliimly statements are designed for safety, ease of use, and ability to write stable code. Most statements typically perform common complex tasks with options to easily customize them; such options are compile-time whenever possible, increasing run-time performance.
Variables, scope
A variable is created the first time it's encountered in any given scope, and is never created again in the same or inner scopes, which avoids common bugs involving more than one variable with the same name in related scopes. You can still of course create variables with the same name in unrelated scopes.

Some structured types (array, index, list) as well as primitive types (numbers, strings and booleans) can be created with process-scope, meaning their value persists across all requests served by the same process. This is useful for building services that keep data and query it quickly (such as caches).
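As a hedged sketch modeled on the cache-server example later on this page, a process-scoped index created inside do-once persists across all requests served by that process:
 begin-handler /cache-add public
     do-once
         // created once per process; its contents survive across requests
         new-index cache process-scope
     end-do-once
     get-param key
     get-param data
     write-index cache key (key) value data status st
     if-true st equal GG_ERR_EXIST
         @Key exists [<<p-out key>>]
     else-if
         @Added [<<p-out key>>]
     end-if
 end-handler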

Numbers and booleans are assigned by value, while strings are assigned by reference (to avoid unnecessary copying).
Infrastructure
Gliimly includes request-processing and all the necessary infrastructure, such as for process management, files, networking, service protocols, database, string processing etc.
Performance
Gliimly applications are high-performance native executables by design, with no byte-code, interpreters or similar. Since Gliimly is declarative, just a few statements are needed to implement lots of functionality. These statements are implemented in pure C, and are not slowed down by memory checks as they are internally safe by implementation. Only developer-facing Gliimly code needs additional logic to enforce memory safety, and that's a very small part of the overall run-time cost. This means Gliimly can truly be memory-safe and high-performance at the same time.
Database access
Gliimly provides access to a number of popular databases, such as MariaDB/MySQL, PostgreSQL and SQLite (see database-config-file).
Proven libraries
Gliimly uses well-known and widely used Free Open Source libraries like cURL, OpenSSL, crypto, FastCGI, standard database-connectivity libraries from MariaDB, PostgreSQL, SQLite etc., for compliance, performance and reliability.
Names of objects
Do not use object names (such as variable names and request names) that start with "_gg_" or "gg_" (including upper-case variations), as those are reserved by Gliimly.
See also
General
about-gliim  
directories  
SELinux  
See all
documentation
 After handler

Purpose: Execute your code after a request is handled.

 after-handler
 ...
 end-after-handler

Every Gliimly request goes through a request dispatcher (see request()). In order to specify your code to execute after a request is handled, create source file "after-handler.gliim" and implement a handler that starts with "after-handler" and ends with "end-after-handler", which will be automatically picked up and compiled with your application.

If no request executes (for example, if your application does not handle a given request), the after-handler handler does not execute either. If you use exit-handler to exit the current request handling, the after-handler handler still executes.
Examples
Here is a simple implementation of after-handler handler that just outputs "Hi there!!":
 after-handler
      @Hi there!!
 end-after-handler

See also
Service processing
after-handler  
before-handler  
begin-handler  
call-handler  
See all
documentation
 Application setup

Initialize application
A Gliimly application must be initialized first. This means creating a directory structure owned by the application owner, which can be any operating system user. To initialize application <app name> while logged in as the application owner:
sudo mgrg -i -u $(whoami) <app name>

Setup database(s)
If your application does not use database(s), you can skip this part.

You can set up your database(s) in any way you see fit, and this includes creating the database objects (such as tables or indexes) used by your application; all Gliimly needs to know is the connection parameters, which include database login information (but can include other things as well). For each database in use, you must provide a database-config-file in the same directory as your Gliimly source code. This file contains the database connection parameters, which are database-specific. For example, if your code has statements like:
 run-query @mydb = ...

 //or

 begin-transaction @sales_db

then you must have files "mydb" and "sales_db" present. For example, MariaDB config file might look like:
[client]
user=gliimuser
password=pwd
database=gliimdb
protocol=TCP
host=127.0.0.1
port=3306

or for PostgreSQL:
user=myuser password=mypwd dbname=mydb

Make application
To compile and link the application that doesn't use database(s):
gg -q

When you have database(s) in use - for instance, assuming in the above example that "mydb" is a MariaDB database, "sales_db" is PostgreSQL, and "contacts" is a SQLite database:
gg -q --db="mariadb:mydb postgres:sales_db sqlite:contacts"

See gg for more options.
Start application
Stop the application first in case it was running, then start the application - for example:
mgrg -m quit <app name>
mgrg -w 3 <app name>

See mgrg for more details.
Running application
You can run your application as a service, as CGI, or from the command line.
See also
Running application
application-setup  
CGI  
command-line  
service  
See all
documentation

Create a directory for your Hello World application and then switch to it:
mkdir hello-world
cd hello-world

Create the application:
sudo mgrg -i -u $(whoami) hello

Create a file hello-world.gliim:
vim hello-world.gliim

and copy this code to it:
 begin-handler /hello-world public
     get-param name
     @This is Hello World from <<p-out name>>
 end-handler

This service takes input parameter "name" (see get-param), and then outputs it along with a greeting message (see output-statement).

Compile the application:
gg -q

Run the application by executing this service from command line. Note passing the input parameter "name" with value "Mike":
gg -r --req="/hello-world/name=Mike" --exec --silent-header

The output is:
This is Hello World from Mike

Gliimly is at https://gliimly.github.io/.

Writing a service is the same as writing a command-line program in Gliimly. Both take the same input and produce the same output, so you can test with either one to begin with.

For that reason, first create Hello World as a command-line program.

The only thing to do afterwards is to start up Hello World as an application server:
mgrg hello

Now there are a number of resident processes running, expecting client requests. You can see those processes:
ps -ef|grep hello

The result:
bear       25772    2311  0 13:04 ?        00:00:00 mgrg hello
bear       25773   25772  0 13:04 ?        00:00:00 /var/lib/gg/bld/hello/hello.srvc
bear       25774   25772  0 13:04 ?        00:00:00 /var/lib/gg/bld/hello/hello.srvc
bear       25775   25772  0 13:04 ?        00:00:00 /var/lib/gg/bld/hello/hello.srvc
bear       25776   25772  0 13:04 ?        00:00:00 /var/lib/gg/bld/hello/hello.srvc
bear       25777   25772  0 13:04 ?        00:00:00 /var/lib/gg/bld/hello/hello.srvc

"mgrg hello" runs the Gliim process manager for application "hello". A number of ".../hello.srvc" processes are server processes that will handle service request sent to application "hello".

Now, to test your service, you can send a request to the server from command line (by using "--service" option):
gg -r --req="/hello-world/name=Mike" --exec --silent-header --service

The above will make a request to one of the processes above, which will then reply:
This is Hello World from Mike


To access a Gliimly service on the web, you need to have a web server or load balancer (think Apache, Nginx, HAProxy etc.).

This assumes you have completed the Hello World as a Service, with a service built and tested via command line.

In this example, Nginx web server is used; edit its configuration file. For Ubuntu and similar:
sudo vi /etc/nginx/sites-enabled/default

while on Fedora and other systems it might be:
sudo vi /etc/nginx/nginx.conf

Add the following in the "server {}" section:
location /hello/ { include /etc/nginx/fastcgi_params; fastcgi_pass  unix:///var/lib/gg/hello/sock/sock; }

"hello" refers to your Hello World application. Finally, restart Nginx:
sudo systemctl restart nginx

Now you can call your web service from the web. In this case it's probably a local server (127.0.0.1) if you're doing this on your own computer. The URL would be:
http://127.0.0.1/hello/hello-world/name=Mike

Note the URL request structure: first comes the application path ("/hello") followed by request path ("/hello-world") followed by URL parameters ("/name=Mike"). The result:

This is a cache server that can add, delete and query key/value pairs, with their number limited only by available memory.

We'll use the "index" type, which is a high-performance data structure. For example, with 1,000,000 keys it takes only about 20 comparisons (roughly log2 of 1,000,000) to find any key; and the range search is just one hop. The index is based on a modified AVL/B tree.

Create new "index" application first, in a new directory (you can name it anything you like):
mkdir -p index
cd index

The mgrg command is Gliimly's service manager; here it will create a new application named "index" (the application name can be different from the name of the directory it's in):
sudo mgrg -i -u $(whoami) index

Create a source code file "srv.gliim":
vi srv.gliim

and copy and paste this:
 begin-handler /srv public
     do-once
         new-index ind process-scope
     end-do-once
     get-param op
     get-param key
     get-param data
     if-true op equal "add"
         write-index ind key (key) value data status st
         if-true st equal GG_ERR_EXIST
             @Key exists [<<p-out key>>]
         else-if
             @Added [<<p-out key>>]
         end-if
     else-if op equal "delete"
         delete-index ind key (key) value val status st
         if-true st equal GG_ERR_EXIST
             @Not found [<<p-out key>>]
         else-if
             @Deleted, old value was [<<p-out val>>]
         end-if
     else-if op equal "query"
         read-index ind equal (key) value val status st
         if-true st equal GG_ERR_EXIST
             @Not found, queried [<<p-out key>>]
         else-if
             @Value [<<p-out val>>]
         end-if
     end-if
 end-handler

The service will run as a single process because each operation is handled very fast, even with a large number of concurrent requests.
Build a service
gg -q

Run as service
mgrg -w 1 index

The above will start a single server process (-w 1) to serve incoming requests.
Test it
This is a bash test script to insert 3 keys into your cache server, query them, then delete them. Create "test_tree" file:
vi test_tree

And copy/paste the following:
#Add 3 key/data pairs. Keys are 1, 2, 3 and data values are "data_1", "data_2", "data_3".
for i in {1..3}; do
   gg -r --req="/srv/op=add/key=$i/data=data_$i" --exec --service --app="/index" --silent-header
done
echo "Keys added"

#Query all 3 keys and check that values retrieved are the correct ones.
for i in {1..3}; do
   gg -r --req="/srv/op=query/key=$i" --exec --service --app="/index" --silent-header
done
echo "Keys queried"

#Delete all 3 keys
for i in {1..3}; do
   gg -r --req="/srv/op=delete/key=$i" --exec --service --app="/index" --silent-header
done
echo "Keys deleted"

Make sure it's executable and run it:
chmod +x test_tree
./test_tree

The result is this:
Added [1]
Added [2]
Added [3]
Keys added
Value [data_1]
Value [data_2]
Value [data_3]
Keys queried
Deleted, old value was [data_1]
Deleted, old value was [data_2]
Deleted, old value was [data_3]
Keys deleted


This example shows Apache as the front end (or "reverse proxy") for the cache server - it's assumed you've completed that example first. Three steps to set up Apache quickly:
  1. Enable FastCGI proxy used to communicate with Gliimly services - this is one time only:
  2. Edit Apache configuration file:
    Add this to the end of the configuration file - note "index" is the application name you created in the above example:
    ProxyPass "/index/" unix:///var/lib/gg/index/sock/sock|fcgi://localhost/index

  3. Restart Apache.
- Test web service

Now you can call your web service from the web. In this case it's probably a local server (127.0.0.1) if you're doing this on your own computer. The URLs would be, for example, to add, query and delete a key/value pair:
http://127.0.0.1/index/srv/op=add/key=1/data=d_1

http://127.0.0.1/index/srv/op=query/key=1

http://127.0.0.1/index/srv/op=delete/key=1

Note the URL request structure: first comes the application path ("/index"), followed by the request path ("/srv"), followed by the URL parameters (such as "/op=add/key=1/data=d_1"). The result in the web browser is a message informing you that the key was added, queried or deleted.
Create a directory for your project; it'll be where this example takes place. Then create the Gliimly application "stock":
mkdir -p stock-app
cd stock-app
sudo mgrg -i -u $(whoami) stock

Start MariaDB command line interface:
sudo mysql

Create an application user, database and a stock table (with stock name and price):
create user stock_user;
create database stock_db;
grant all privileges on stock_db.* to stock_user@localhost identified by 'stock_pwd';
use stock_db
create table if not exists stock (stock_name varchar(100) primary key, stock_price bigint);

Gliimly needs you to describe the database: the user name, password and database name; the rest is the default setup for a MariaDB database connection. So create a file "db_stock" (which is your database configuration file; you need one per database used):
vi db_stock

and copy and paste this:
[client]
user=stock_user
password=stock_pwd
database=stock_db
protocol=TCP
host=127.0.0.1
port=3306

Now to the code. Here's the web service to insert stock name and price into the stock table - create file "add-stock.gliim":
vi add-stock.gliim

and copy and paste this:
 %% /add-stock public
     @<html>
         @<body>
         get-param name
         get-param price
         // Add data to stock table, update if the stock exists
         run-query @db_stock = "insert into stock (stock_name, stock_price) values ('%s', '%s') on duplicate key update stock_price='%s'" \
             input name, price, price error err no-loop
         if-true err not-equal "0"
             report-error "Cannot update stock price, error [%s]", err
         end-if
         @<div>
             @Stock price updated!
         @</div>
         @</body>
     @</html>
 %%

Next is the web service to display a web page with all stock names and prices from the stock table - create file "show-stock.gliim":
vi show-stock.gliim

and copy and paste this:
 %% /show-stock public
     @<html>
         @<body>
             @<table>
                 @<tr>
                     @<td>Stock name</td>
                     @<td>Stock price</td>
                 @</tr>
                 run-query @db_stock = "select stock_name, stock_price from stock" output stock_name, stock_price
                     @<tr>
                         @<td>
                         p-out stock_name
                         @</td>
                         @<td>
                         p-out  stock_price
                         @</td>
                     @</tr>
                 end-query
             @</table>
         @</body>
     @</html>
 %%

You're done! Now it's time to make your application. You need to tell Gliimly that your database configuration file "db_stock" is MariaDB (because you could use PostgreSQL or SQLite for instance):
gg -q --db="mariadb:db_stock"

Test your web service. Here you'd run it as a command line program. That's neat because you can test your web services without even using a web server or a browser:
gg -r --req="/add-stock/name=ABC/price=882" --exec
gg -r --req="/add-stock/name=XYZ/price=112" --exec

The result for each:
Content-Type: text/html;charset=utf-8
Cache-Control: max-age=0, no-cache
Pragma: no-cache
Status: 200 OK

<html>
<body>
<div>
Stock price updated!
</div>
</body>
</html>

And to test the list of stocks:
gg -r --req="/show-stock" --exec

The result:
Content-Type: text/html;charset=utf-8
Cache-Control: max-age=0, no-cache
Pragma: no-cache
Status: 200 OK

<html>
<body>
<table>
<tr>
<td>Stock name</td>
<td>Stock price</td>
</tr>
<tr>
<td>
ABC</td>
<td>
882</td>
</tr>
<tr>
<td>
XYZ</td>
<td>
112</td>
</tr>
</table>
</body>
</html>

You can see the actual response, the way it would be sent to a browser, or an API web client, or any other kind of web client.

Memory safety guards against software security risks and malfunctions by ensuring data isn't written to or read from unintended areas of memory. It also prevents leaks, which is important for web services, as they are long-running processes - memory leaks usually lead to running out of memory and crashes.

But what of the performance cost of memory safety?  Gliimly is a very high level programming language. It's not like other languages, and you can intuitively experience that just by looking at the code. It feels more like speaking in English than moving bits and bytes or calling APIs.

So when you build your web services, you won't write a lot of code; rather, you'll express what you want done in a declarative language, and natively-compiled, high-performance C code will do the rest. This code is designed to be memory-safe, but because it's C, it avoids the penalty of being implemented in a general-purpose memory-safe language, where everything that's done, from bottom up and top down, would be subject to memory-safety checks.

As a result, the cost incurred on memory safety is mostly in checking input data of such statements and not in the actual implementation which is where most of the performance penalty would be. In addition, the output of Gliimly statements is generally new immutable memory, hence it needs no checking. This means memory safety checks are truly minimal, and likely close to a theoretical minimum.

Gliimly also has a light implementation of memory safety. One example is that any memory used in a request is by default released at the end of it, and not every time the memory goes out of scope, which saves a lot of run-time checks. You can still have the "heavy" implementation if you're short on RAM, but chances are you won't need it. In short, "light" is good because the best way not to pay a heavy price for a slow memory-safe system is not to have one.

In summary, the choices made when designing and implementing a memory safe programming language profoundly affect the resulting performance.
This is a complete SaaS example (Software-as-a-Service) using PostgreSQL as a database, and Gliimly as a web service engine; it includes user signup/login/logout with an email and password, separate user accounts and data, and a notes application. All in about 200 lines of code!

First create a directory for your application, where the source code will be:
mkdir -p notes
cd notes

Setup Postgres database
Create a PostgreSQL user (with the same name as your logged-in Linux user, so no password is needed), and the database "db_app":
echo "create user $(whoami);
create database db_app with owner=$(whoami);
grant all on database db_app to $(whoami);
\q"  | sudo -u postgres psql

Create a database configuration file to describe your PostgreSQL database above:
echo "user=$(whoami) dbname=db_app" > db_app

Create database objects we'll need - users table for application users, and notes table to hold their notes:
echo "create table if not exists notes (dateOf timestamp, noteId bigserial primary key, userId bigint, note varchar(1000));
create table if not exists users (userId bigserial primary key, email varchar(100), hashed_pwd varchar(100), verified smallint, verify_token varchar(30), session varchar(100));
create unique index if not exists users1 on users (email);" | psql -d db_app

Create Gliimly application
Create application "notes" owned by your Linux user:
sudo mgrg -i -u $(whoami) notes

Source code
This executes before any other handler in the application, making sure all requests are checked for authorization. Create file "before-handler.gliim":
vi before-handler.gliim

 before-handler
     set-param displayed_logout = false, is_logged_in = false
     call-handler "/session/check"
 end-before-handler


- Signup users, login, logout

This is a generic session management web service that handles user creation, verification, login and logout. Create file "session.gliim":
vi session.gliim

Copy and paste:
 // Display link to login or signup
 %% /session/login-or-signup private
     @<a href="<<p-path "/session/user/login">>">Login</a> &nbsp; &nbsp; <a href="<<p-path "/session/user/new/form">>">Sign Up</a><hr/>
 %%
 // Login with email and password, and create a new session, then display home page
 %% /session/login public
     get-param pwd, email
     hash-string pwd to hashed_pwd
     random-string to sess_id length 30
     run-query @db_app = "select userId from users where email='%s' and hashed_pwd='%s'" output sess_user_id : email, hashed_pwd
         run-query @db_app no-loop = "update users set session='%s' where userId='%s'" input sess_id, sess_user_id affected-rows arows
         if-true arows not-equal 1
             @Could not create a session. Please try again. <<call-handler "/session/login-or-signup">> <hr/>
             exit-handler
         end-if
         set-cookie "sess_user_id" = sess_user_id path "/", "sess_id" = sess_id path "/"
         call-handler "/session/check"
         call-handler "/session/show-home"
         exit-handler
     end-query
     @Email or password are not correct. <<call-handler "/session/login-or-signup">><hr/>
 %%
 // Starting point of the application. Either display login form or a home page:
 %% /session/start public
     get-param action, is_logged_in type bool
     if-true is_logged_in equal true
         if-true action not-equal "logout"
             call-handler "/session/show-home"
             exit-handler
         end-if
     end-if
     call-handler "/session/user/login"
 %%
 // Generic home page, you can call anything from here, in this case a list of notes
 %% /session/show-home private
     call-handler "/notes/list"
 %%
 // Logout user and display home, which will ask to either login or signup
 %% /session/logout public
     get-param is_logged_in type bool
     if-true is_logged_in equal true
         get-param sess_user_id
         run-query @db_app = "update users set session='' where userId='%s'" input sess_user_id no-loop affected-rows arows
         if-true arows equal 1
             set-param is_logged_in = false
             @You have been logged out.<hr/>
             commit-transaction @db_app
         end-if
     end-if
     call-handler "/session/show-home"
 %%
 // Check session based on session cookie. If session cookie corresponds to the email address, the request is a part of an authorized session
 %% /session/check private
     get-cookie sess_user_id="sess_user_id", sess_id="sess_id"
     set-param sess_id, sess_user_id
     if-true sess_id not-equal ""
         set-param is_logged_in = false
         run-query @db_app = "select email from users where userId='%s' and session='%s'" output email input sess_user_id, sess_id row-count rcount
             set-param is_logged_in = true
             get-param displayed_logout type bool
             if-true displayed_logout equal false
                 get-param action
                 if-true action not-equal "logout"
                     @Hi <<p-out email>>! <a href="<<p-path "/session/logout">>">Logout</a><br/>
                 end-if
                 set-param displayed_logout = true
             end-if
         end-query
         if-true rcount not-equal 1
             set-param is_logged_in = false
         end-if
     end-if
 %%
 // Check that email verification token is the one actually sent to the email address
 %% /session/verify-signup public
     get-param code, email
     run-query @db_app = "select verify_token from users where email='%s'" output db_verify : email
         if-true  code equal db_verify
             @Your email has been verified. Please <a href="<<p-path "/session/user/login">>">Login</a>.
             run-query @db_app no-loop = "update users set verified=1 where email='%s'" : email
             exit-handler
         end-if
     end-query
     @Could not verify the code. Please try <a href="<<p-path "/session/user/new/verify-form">>">again</a>.
     exit-handler
 %%
 // Display login form that asks for email and password
 %% /session/user/login public
     call-handler "/session/login-or-signup"
     @Please Login:<hr/>
     @<form action="<<p-path "/session/login">>" method="POST">
     @<input name="email" type="text" value="" size="50" maxlength="50" required autofocus placeholder="Email">
     @<input name="pwd" type="password" value="" size="50" maxlength="50" required placeholder="Password">
     @<button type="submit">Go</button>
     @</form>
 %%
 // Display form for a new user, asking for an email and password
 %% /session/user/new/form public
     @Create New User<hr/>
     @<form action="<<p-path "/session/user/new/create">>" method="POST">
     @<input name="email" type="text" value="" size="50" maxlength="50" required autofocus placeholder="Email">
     @<input name="pwd" type="password" value="" size="50" maxlength="50" required placeholder="Password">
     @<input type="submit" value="Sign Up">
     @</form>
 %%
 // Send verification email
 %% /session/user/new/send-verify private
     get-param email, verify
     write-string msg
         @From: service@your-service.com
         @To: <<p-out email>>
         @Subject: verify your account
         @
         @Your verification code is: <<p-out verify>>
     end-write-string
     exec-program "/usr/sbin/sendmail" args "-i", "-t" input msg status st
     if-true st not-equal 0
         @Could not send email to <<p-out email>>, code is <<p-out verify>>
         set-param verify_sent = false
     else-if
         set-param verify_sent = true
     end-if
 %%
 // Create new user from email and password
 %% /session/user/new/create public
     get-param email, pwd
     hash-string pwd to hashed_pwd
     random-string to verify length 5 number
     begin-transaction @db_app
     run-query @db_app no-loop = "insert into users (email, hashed_pwd, verified, verify_token, session) values ('%s', '%s', '0', '%s', '')" input email, hashed_pwd, verify affected-rows arows error err on-error-continue
     if-true err not-equal "0" or arows not-equal 1
         call-handler "/session/login-or-signup"
         @User with this email already exists.
         rollback-transaction @db_app
     else-if
         set-param email, verify
         call-handler "/session/user/new/send-verify"
         get-param verify_sent type bool
         if-true verify_sent equal false
             rollback-transaction @db_app
             exit-handler
         end-if
         commit-transaction @db_app
         call-handler "/session/user/new/verify-form"
     end-if
 %%
 // Display form to enter the code emailed to user to verify the email address
 %% /session/user/new/verify-form public
     get-param email
     @Please check your email and enter verification code here:
     @<form action="<<p-path "/session/verify-signup">>" method="POST">
     @<input name="email" type="hidden" value="<<p-out email>>">
     @<input name="code" type="text" value="" size="50" maxlength="50" required autofocus placeholder="Verification code">
     @<button type="submit">Verify</button>
     @</form>
 %%

- Notes application

This is the actual application that uses above session management services. Create file "notes.gliim":
vi notes.gliim

Copy and paste:
 // Delete a note
 %% /notes/delete public
     call-handler "/notes/header"
     get-param sess_user_id, note_id
     run-query @db_app = "delete from notes where noteId='%s' and userId='%s'" : note_id, sess_user_id \
             affected-rows arows no-loop error errnote
     if-true arows equal 1
         @Note deleted
     else-if
         @Could not delete note (<<p-out errnote>>)
     end-if
 %%
 // Display a form to add a note
 %% /notes/form-add  public
     call-handler "/notes/header"
     @Add New Note
     @<form action="<<p-path "/notes/add">>" method="POST">
     @<textarea name="note" rows="5" cols="50" required autofocus placeholder="Enter Note"></textarea>
     @<button type="submit">Create</button>
     @</form>
 %%
 // Add a note
 %% /notes/add public
     call-handler "/notes/header"
     get-param note, sess_user_id
     run-query @db_app = "insert into notes (dateOf, userId, note) values (now(), '%s', '%s')" : sess_user_id, note \
             affected-rows arows no-loop error errnote
     if-true arows equal 1
         @Note added
     else-if
         @Could not add note (<<p-out errnote>>)
     end-if
 %%
 // List all notes
 %% /notes/list public
     call-handler "/notes/header"
     get-param sess_user_id
     run-query @db_app = "select dateOf, note, noteId from notes where userId='%s' order by dateOf desc" \
             input sess_user_id output dateOf, note, noteId
         match-regex "\n" in note replace-with "<br/>\n" result with_breaks status st cache
         if-true st equal 0
             set-string with_breaks = note
         end-if
         @Date: <<p-out dateOf>> (<a href="<<p-path "/notes/ask-delete">>?note_id=<<p-out noteId>>">delete note</a>)<br/>
         @Note: <<p-out with_breaks>><br/>
         @<hr/>
     end-query
 %%
 // Display a question whether to delete a note or not
 %% /notes/ask-delete public
     call-handler "/notes/header"
     get-param note_id
     @Are you sure you want to delete a note? Use Back button to go back,\
        or <a href="<<p-path "/notes/delete">>?note_id=<<p-out note_id>>">delete note now</a>.
 %%
 // Check if session is authorized, and display an appropriate header
 %% /notes/header private
     get-param is_logged_in type bool
     if-true is_logged_in equal false
         call-handler "/session/login-or-signup"
     end-if
     @<h1>Welcome to Notes!</h1><hr/>
     if-true is_logged_in equal false
         exit-handler
     end-if
     @<a href="<<p-path "/notes/form-add">>">Add Note</a> <a href="<<p-path "/notes/list">>">List Notes</a><hr/>
 %%

Build application
gg -q --db=postgres:db_app

Run web services application server
mgrg notes

Emailing
In order to use this example, you need to be able to email local users, which means email addresses such as "myuser@localhost". To do that, install postfix (or sendmail). On Debian systems (like Ubuntu):
sudo apt install postfix
sudo systemctl start postfix

and on Fedora systems (like RedHat):
sudo dnf install postfix
sudo systemctl start postfix

When the application sends an email to a local user, such as <OS user>@localhost, then you can see the email sent at:
sudo vi /var/mail/<OS user>

Setup Nginx
A web server sits in front of the Gliimly application server, so it needs to be set up. This example is for Ubuntu, so edit the Nginx config file there:
sudo vi /etc/nginx/sites-enabled/default

Add this in "server {}" section:
location /notes/ { include /etc/nginx/fastcgi_params; fastcgi_pass  unix:///var/lib/gg/notes/sock/sock; }

Restart Nginx:
sudo systemctl restart nginx

You're done, run it!
Go to your web browser, and enter:
http://127.0.0.1/notes/session/start


A web service doesn't necessarily need to be called from "the web", meaning from the web browser or via API across the web. It can be called from another web service that's on a local network.

Typically, when called from the web, the HTTPS protocol is used to ensure the safety of that call. However, local networks are usually secure, meaning no one has access to them but your own web services.

Thus, communication between local web services will be much faster if it doesn't use a secure protocol, which incurs a performance cost. Simple, fast and unburdened protocols, such as FastCGI, may be better.

FastCGI is interesting because it carries the same information as HTTP, so a web service can operate normally, using GET/POST/etc. request methods, passing parameters in the URL, the request body, environment variables etc. But at the same time, FastCGI is a fast binary protocol that doesn't incur the cost of transport security - and for local web-service-to-web-service communication, that's a good thing.

Just like HTTP, FastCGI can separate standard output from standard error, allowing both streams to be carried together but retrieved separately.

Overall, inter-web-service communication can be implemented with the aforementioned protocols in a way that preserves the high-level HTTP functionality innate to web services, but with better performance.
The security of web services is a broad subject, but there are a few common topics one should be well aware of. They affect whether your web application, or API service, will be a success or not.

Many web services provide an interface to a database. SQL injection is a very old, but nonetheless important issue. A naive implementation of SQL execution inside your code could open the door for a malicious actor to drop your tables, insert/update/delete records etc. Be sure that your back-end software is SQL injection proof.
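As a hedged sketch of this in Gliimly, user input is passed through run-query's input clause rather than concatenated into the SQL text (the "db_stock" config and "stock" table are borrowed from the stock example above, and the input clause is assumed to perform the necessary escaping):
 %% /find-stock public
     // "name" comes from the caller and is supplied via the input clause, not spliced into the query string
     get-param name
     run-query @db_stock = "select stock_price from stock where stock_name='%s'" output stock_price input name
         @Price: <<p-out stock_price>>
     end-query
 %%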

No matter how careful you are about writing your web service code, memory safety is an important feature to have. It avoids exploits such as buffer overwrites or underwrites, which can cause unexpected behavior. In addition, proper memory leak detection and prevention is important, especially because back-end services typically run as long-running processes, often for weeks or months without downtime. A leak could crash such a process simply because it runs out of memory, or can no longer open a new file.

A common issue is one of security design. Some services are internal (or private), and some are external (or public). This means some are not meant to be called by an end user from a web browser, or via an API from an outside caller. In fact, calling such a service could be a huge security hole. Imagine if you had a service that updates your internal application data. Such a service clearly must never be called by an outside actor; rather, it should only be accessible to your own services acting on behalf of such outside actors. Be sure that the back-end software you use has a simple and clearly-defined way to handle such scenarios. Even so, overly complicated security schemes can sometimes be hard to implement correctly and can be just as detrimental.

Virtually all web services authenticate their users. That usually means using an email address and password for login, along with cookies for browser users, or tokens otherwise. Never keep passwords in plain text or even obscured; keep only a one-way hash. This way, even if your user database is stolen, the passwords cannot be (easily) recovered.
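A hedged sketch of storing only a one-way hash, using hash-string as in the SaaS example above (the "/user/signup" path is hypothetical; the "db_app" config and "users" table are borrowed from that example):
 %% /user/signup public
     get-param email, pwd
     // store the hash, never the password itself
     hash-string pwd to hashed_pwd
     run-query @db_app no-loop = "insert into users (email, hashed_pwd) values ('%s', '%s')" input email, hashed_pwd
     @User created.
 %%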

This example will create a service that inserts a key and value into an SQLite table. It's tested from the command line.

Create a directory and then switch to it:
mkdir sqlite
cd sqlite

Setup SQLite database in file "mydata.db":
echo 'drop table if exists key_value; 
create table if not exists key_value (key varchar(50) primary key, value varchar(100));' | sqlite3 mydata.db

Create configuration file "mydb" that describes the SQLite database "mydata.db":
echo "$(pwd)/mydata.db"> mydb

Create the application:
sudo mgrg -i -u $(whoami) sqlite

Create file insert.gliim:
vim insert.gliim

Copy and paste to it (note use of database configuration "mydb" in @mydb):
 %% /insert public
     get-param key
     get-param value
     run-query @mydb = "insert into key_value(key,value) values ('%s', '%s')" input key, value error err affected-rows aff_rows no-loop
     @Error <<p-out err>>, affected rows <<p-num aff_rows>>
 %%

Compile the application - we specify that file "mydb" is describing SQLite database:
gg -q --db=sqlite:mydb

Run the application by executing this service from command line. Note passing the input parameters "key" and "value" to it:
gg -r --req="/insert/key=1/value=one" --exec --silent-header

The output is:
Error 0, affected rows 1

Verify data inserted:
echo -e '.headers off\n.mode line\nselect key "Key", value "Value" from key_value;' | sqlite3 mydata.db

The result:
Key = 1
Value = one


Gliimly is a new programming language and framework for developing web services and web applications. The reason for Gliimly is to make software development easier and more reliable, and to improve run-time performance. To do this, Gliimly is a very high-level language, yet a high-performance one; those two qualities don't usually go together.

Gliimly is a declarative language designed for simplicity. That means a top-down approach rather than a bottom-up one: it's more about describing what to do than coding it. It's a modeling language where pieces are assembled together quickly and with confidence. It's about a framework to create and deploy web services based on what they need to do from a human perspective, more so than a technical one.

Underlying Gliimly's functionality are industry-standard Open Source libraries, such as SSL, Curl, MariaDB and others, in addition to Gliimly's own native libraries.

In extended mode, Gliimly is extensible with any standard libraries. You can also include C files directly in your project to compile with it. In this mode, Gliimly (obviously) does not guarantee memory safety, but that does not necessarily mean it's unsafe either.

Gliimly is very simple to work with - it doesn't even have expressions in the sense other languages do, save for very basic integer expressions (with plus, minus, divide, multiply). This is by design, to reduce complexity and improve performance. Gliimly's statements aim to deliver complete functionality without complicated coding, are customizable to a great extent, and are statically optimized at compile time for performance.

Gliimly installation comes with a Vim module for highlighting Gliimly code. After installing Gliimly, run this to install the Vim module:
gg -m

The default color scheme looks like this:
To change color scheme, type ":colorscheme " in command mode, then press Tab to see available color schemes. Press Enter to choose one. For instance, in 'darkblue' color scheme, it may look like:
To make the change permanent, edit file ".vimrc" in home directory:
vi ~/.vimrc

and append line:
colorscheme darkblue



Gliimly is a new programming language and framework for developing web services and web applications. It is:
Gliimly is at https://gliimly.github.io/.

Gliimly is a new programming language and framework for developing web services and web applications. The reason for Gliimly is to make software development easier and more reliable, and to improve run-time performance. To do this, Gliimly is a very high-level language, yet a high-performance one; those two qualities don't usually go together.

Gliimly is a declarative language designed for simplicity. That means a top-down approach rather than a bottom-up one: it's more about describing what to do than coding it. It's a modeling language where pieces are assembled together quickly and with confidence. It's about a framework to create and deploy web services with less effort and more quickly.

Gliimly is a memory-safe language. Your program is safe from overwriting memory it shouldn't overwrite, and it won't leave dangling pointers hanging around. Gliimly is a statically-typed language with only three basic types (strings, numbers and booleans) and (currently) the following structured types: service, message, array, index, index-cursor, fifo, lifo, list, split-string and file.

Gliimly is also a high-performance compiled language, designed to create fast and small native executables without interpreters or p-code.

Memory-safe languages often suffer performance bottlenecks because range checking, garbage collection and other memory-management techniques take their toll.

Gliimly is designed from the ground up to alleviate these issues. Firstly, the best way not to lose performance on expensive memory management is not to have it. By default, Gliimly has a light-weight memory-safety implementation, which you can expand if your system is short on memory. In addition, its run-time libraries are written entirely in C, and the run-time overhead comes at the input and output of Gliimly statements, not within the libraries themselves. Since the libraries do most of the run-time work, the impact of imposing memory safety is minimal.

Underlying Gliimly's functionality are industry-standard Open Source libraries, such as SSL, Curl, MariaDB and others, in addition to Gliimly's own native libraries.

In extended mode, Gliimly is extensible with any standard libraries (via C interop), which means most programming languages (including C/C++, Rust etc.). In this mode, Gliimly (obviously) does not guarantee memory safety, but that does not necessarily mean it's unsafe either.

Gliimly is very simple to work with - it doesn't even have expressions in the sense other languages do, save for very basic integer expressions (with plus, minus, divide, multiply). This is by design, to reduce complexity and improve performance. Gliimly's statements aim to deliver complete functionality without complicated coding, are customizable to a great extent, and are statically optimized at compile time for performance.
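A hedged sketch of such a basic integer expression, assuming it can appear on the right-hand side of set-number (see number-expressions):
 %% /calc public
     set-number price = 30
     set-number qty = 4
     // assumed: multiply and add in a basic integer expression
     set-number total = price * qty + 5
     @Total is <<p-num total>>
 %%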

A web service is code that responds to a request and provides a reply over the HTTP protocol. It doesn't need to work over the web, despite its name. You can run a web service locally on a server or on a local network. You can even run a web service from the command line; in fact, that's an easy way to test one.

The input comes from an HTTP request - this means via URL parameters plus (optional) request body.

The parameters can be in the URL's path (such as "/a=b/c=d/...") or in its query string (such as "?a=b&c=d...") or both - this data is typically limited in size to 2KB. Additional parameters can be appended to the request body - this is, for instance, how files are uploaded in an HTML form.

The request body itself can be data of virtually any size - web services typically have an adjustable size limit for this data, just to avoid mistakenly (or maliciously) huge ones. A request body could contain, for example, a JSON document, or some other kind of data.

The output of a web service can be HTML, JSON, XML, an image such as JPG, or just about anything. It's up to the caller of the web service to interpret it. One such caller is a web browser; another could be an API client in an application, etc.

What's the difference between a web application and a web service? Well, technically a web application should be a collection of web services, which are typically more basic service providers. That's why web services are often used as endpoints for remote APIs. They generally have a well defined input and output and are not too big. They serve a specialized purpose most of the time.
 Before handler

Purpose: Execute your code before a request is handled.

 before-handler
 ...
 end-before-handler

Every Gliimly request goes through a request dispatcher (see request()), which is auto-generated. In order to specify your code to execute before a request is handled, create source file "before-handler.gliim" and implement code that starts with "before-handler" and ends with "end-before-handler", which will be automatically picked up and compiled with your application.

If no request executes (for example if your application does not handle a given request), before-handler handler does not execute either.
Examples
Here is a simple implementation of before-handler handler that just outputs "Getting Started!!":
 before-handler
     @Getting Started!!
 end-before-handler

See also
Service processing
after-handler  
before-handler  
begin-handler  
call-handler  
See all
documentation
 Begin handler

Purpose: Define a request handler.

 begin-handler <request path> [ private | public ]
     <any code>
 end-handler

begin-handler starts the implementation of a request handler for <request path> (see request), which is <any code> up to end-handler. <request path> is not quoted.

A <request path> is a path consisting of any number of path segments. A request path can have alphanumeric characters, hyphens and forward slashes, and can start only with a forward slash.

For example, a <request path> can be "/wine-items" or "/items/wine" etc. In general, it represents the nature of a request, such as an action on an object, a resource path handled by it etc. There is no specific way to interpret a request path, and you can construct it in a way that works for you.

The name of the source ".gliim" file that implements a given begin-handler matches its path and name, fully or partially (see request). For example, a <request path> of "/items/wine" might be implemented in the "items/wine.gliim" file (meaning in file "wine.gliim" in subdirectory "items").

Note that you can also use "%%" instead of either begin-handler or end-handler or both.
Security of request calls
If "public" clause is used, then a handler can be called from an outside caller, be it a web browser, some web service, service call or command-line program.

If "private" clause is used, then a handler cannot be called from an outside caller; it can only be called from another handler by using call-handler statement.

If neither "public" nor "private" is used, then the default is "private". This default mechanism automatically guards direct execution by outside callers of all handlers not marked "public"; it provides automatic safety guard.

You can change this default behavior with "--public" option in gg, in which case the default is "public". This is useful if either all request handlers should be public, or if only a handful fixed ones are private.
Examples
The following begin-handler is implemented in file "items/wines/red-wine.gliim":
 begin-handler  /items/wines/red-wine public
     @This is a request handler to display a list of red wines!
 end-handler

Another way to write this is:
 %%  /items/wines/red-wine public
     @This is a request handler to display a list of red wines!
 %%
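A handler marked "private" can only be reached through call-handler from another handler, never directly by an outside caller. A hedged sketch, with the handler implemented in file "items/wines/cellar-list.gliim":
 %%  /items/wines/cellar-list private
     @This handler is private; it can only run via call-handler.
 %%

and, from some other handler:
 call-handler "/items/wines/cellar-list"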

See also
Service processing
after-handler  
before-handler  
begin-handler  
call-handler  
See all
documentation
 Begin transaction

Purpose: Begins database transaction.

 begin-transaction [ @<database> ] \
     [ on-error-continue | on-error-exit ] \
     [ error <error> ] [ error-text <error text> ] \
     [ options <options> ]

This statement begins a database transaction.

<options> (in the "options" clause) is any additional options you wish to send to the database for this statement.

Once you start a transaction with begin-transaction, you must either commit it with commit-transaction or roll it back with rollback-transaction. If you do neither, your transaction will be rolled back once the request has completed, and your program will stop with an error message. This is because opening a transaction and leaving it without a commit or rollback is a bug in your program.

You must use begin-transaction, commit-transaction and rollback-transaction instead of calling this functionality through run-query.
Database
<database> is specified in the "@" clause and is the name of the database-config-file. If omitted, your program must use exactly one database (see the --db option in gg).
Error handling
The error code is available in the <error> variable in the "error" clause - this code is always "0" if successful. The <error> code may or may not be a number, but is always returned as a string value. In case of error, the error text is available via the "error-text" clause in the <error text> string.

"on-error-continue" clause specifies that request processing will continue in case of an error, whereas "on-error-exit" clause specifies that it will exit. This setting overrides database-level db-error for this specific statement only. If you use "on-error-continue", be sure to check the error code.

Note that if database connection was lost, and could not be reestablished, the request will error out (see error-handling).
Examples
 begin-transaction @mydb
 run-query @mydb="insert into employee (name, dateOfHire) values ('%s', now())" input "Terry" no-loop
 commit-transaction @mydb
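A hedged sketch of checking the error code with "on-error-continue" (the error and error-text clauses follow the statement syntax above):
 begin-transaction @mydb on-error-continue error err error-text etext
 if-true err not-equal "0"
     @Could not begin transaction: <<p-out etext>>
     exit-handler
 end-if
 run-query @mydb="insert into employee (name, dateOfHire) values ('%s', now())" input "Terry" no-loop
 commit-transaction @mydb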

See also
Database
begin-transaction  
commit-transaction  
current-row  
database-config-file  
db-error  
mariadb-database  
postgresql-database  
rollback-transaction  
run-query  
sqlite-database  
See all
documentation
 Break loop

Purpose: Exit a loop.

 break-loop

break-loop will exit a loop between start-loop and end-loop, run-query and end-query, or read-line and end-read-line statements. Execution continues right after the end of the loop.
Examples
Exit the loop after 300 loops:
 set-number max_loop = 300
 start-loop repeat 1000 use i start-with 1
     @Completed <<p-num i>> loops so far
     if-true i equal max_loop
         break-loop
     end-if
 end-loop
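Exiting a query loop early works the same way; a hedged sketch, assuming the "db_stock" config and "stock" table from the stock example earlier on this page:
 run-query @db_stock = "select stock_name from stock" output stock_name
     if-true stock_name equal "XYZ"
         break-loop
     end-if
     @<<p-out stock_name>>
 end-query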

See also
Program flow
break-loop  
code-blocks  
continue-loop  
do-once  
exit-handler  
if-defined  
if-true  
set-bool  
start-loop  
See all
documentation
 Call extended

Purpose: Call external function or macro (extended mode only).

 call-extended <function> "( " [ & ]<variable>  [ , ... ]  " )"

call-extended calls <function> (which can be a function or macro) with a list of parameter variables. The <function> is defined either in:
The <function> must be declared via C-style declaration in a ".h" file residing in the application source code directory. You can use "--lflag" and "--cflag" options of gg to supply libraries used. In addition, if you need to, you can also have any number of ".c" and ".h" files which will be automatically included in your project. A macro must be defined in ".h" file.

The call-extended statement can only be used in extended mode (see extended-mode). By default, Gliimly code runs in safe mode, which does not allow the use of the call-extended statement. Note that using the call-extended statement does not automatically make your application unsafe; rather, the extended code can be written in a memory-safe language (such as Rust), or, even if written in C, it can be written so as not to cause out-of-bounds memory reads and writes.
C signature, input/output variables, types
Each <variable> can be of C type (or a pointer to C type):
A <function> should not return a value. Rather, use a variable passed as a pointer if you wish to pass the function's output back to your Gliimly code.
Examples
For instance, consider C file "calc.c":
 #include "gliim.h"

 // Compute factorial of f, and store result into res
 void factorial(gg_num f, gg_num *res)
 {
     *res = 1;
     gg_num i;
     for (i = 2; i <= f; i++) {
         *res *= i;
     }
 }

Declare this C function in a header file, for instance "calc.h":
 void factorial(gg_num f, gg_num *res);

You can also have macros in a header file, so for example "calc.h" could be:
 void factorial(gg_num f, gg_num *res);

 #define mod10(n, m) m=(n)%10

In this case you have defined a macro that calculates the modulo of 10 and stores the result into another variable.

Use these in your Gliimly code with call-extended statement, for instance to use a function "factorial()":
 extended-mode

 begin-handler /fact public
     set-number fact
     call-extended factorial (10, &fact)
     p-num fact
 end-handler

In the above example, number "fact" is passed by reference (as a pointer), and it will contain the value of factorial of 10 on return. The result printed out is "3628800".

To use macro "mod10()":
 extended-mode

 begin-handler /mod public
     set-number mod
     call-extended mod10(103, mod)
     p-num mod
 end-handler

In this example, you are using a C macro, so number "mod" is assigned a value directly, per C language rules. The result printed out is "3".
See also
Safety
call-extended  
extended-mode  
See all
documentation
 Call handler

Purpose: Call another handler within the same process.

 call-handler <request path>

Calls another handler within the same request in the same process. You can call any handler within the same application.

<request path> is the request path served by the handler being called. It can be a string variable or a constant.

Use set-param and get-param to pass parameters between the caller and callee handlers.
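
For instance, a minimal sketch (the request path "/account/show" and the parameter names are hypothetical):
 set-param user_id = "42"
 call-handler "/account/show"
 get-param account_info type string
 @<<p-out account_info>>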

call-handler uses the same high-performance hash table used by a request to route requests by name.
Examples
The following example demonstrates calling call-handler twice, and also using its output inline in the caller. An input parameter is passed to it, and an output is obtained:

Copy to file "callsub.gliim":
 %% /callsub public
     //
     // First call to call-handler
     //
     // Set input for call-handler
     set-param inp = "some string"
     (( s
     call-handler "/sub/service"
     ))
     // Get output from call-handler
     get-param out type string
     @<<p-out s>> with output [<<p-out out>>]

     //
     // Second call to call-handler
     //
     // Set input for call-handler called as inline code
     set-param inp = "new string"
     (( s
     @Output: <<call-handler "/sub/service">>
     ))
     // Get output from call-handler
     get-param out type string
     @<<p-out s>> with output [<<p-out out>>]
 %%

And in "sub/service.gliim" file (meaning file "service.gliim" in subdirectory "sub"):
 %% /sub/service private
     @This is sub!
     get-param inp
     (( out
     @got input: <<p-out inp>>
     ))
     set-param out = out
 %%

Create and build an application:
sudo mgrg -i -u $(whoami) subhandler
gg -q

Run it:
gg -r --req="/callsub" --exec --silent-header

The output:
This is sub! with output [got input: some string]
Output: This is sub! with output [got input: new string]

See also
Service processing
after-handler  
before-handler  
begin-handler  
call-handler  
See all
documentation
 Call remote

Purpose: Make a remote service call.

 call-remote <service> [ ,... ]   \
     [ status <status> ]  \
     [ started <started> ] \
     [ finished-okay <finished okay> ]

call-remote will make service call(s) as described in a single <service> or a list of <service>s. Unless only a single <service> is specified, each call will execute in parallel with the others (as multiple threads). call-remote finishes when all <service> calls do. Each <service> must have been created with new-remote.

A <service> call is made to a remote service. "Remote service" means a process accepting requests that is not the same process executing call-remote; it may be running on the same or a different computer, or it may be a different process started by the very same application.

- Multiple service calls in parallel
Executing multiple <service> calls in parallel is possible by specifying a list of <service>s separated by a comma.

There is no limit on how many <service>s you can call at the same time; it is limited only by the underlying Operating System resources, such as threads/processes and sockets.

- Call status
<status> number (in "status" clause) will be GG_OKAY if all <service> calls have each returned GG_OKAY; this means all have started and all have finished with a valid message from the service; or GG_ERR_FAILED if at least one did not (for example if the service could not be contacted, if there was a network error etc.); or GG_ERR_MEMORY if out of memory; or GG_ERR_TOO_MANY if there is too many calls (more than 1,000,000).

Note that GG_OKAY does not mean that the reply is considered a success in any logical sense; only that the request was made and a reply was received according to the service protocol.

- Request(s) status
Note that the actual application status for each <service>, as well as data returned and any application errors can be obtained via "handler-status", "data" and "error" clauses of read-remote statement, respectively.

- Request(s) duration
call-remote will wait for all <service> requests to finish. For that reason, it is a good idea to specify "timeout" clause in new-remote for each <service> used, in order to limit the time you would wait. Use read-remote to detect a timeout, in which case "handler-status" clause would produce GG_CLI_ERR_TIMEOUT.

- How many calls started and finished
<started> (in "started" clause) will be the number of service calls that have started. <finished okay> (in "finished-okay" clause) is the number of calls that have finished with return value of GG_OKAY as described above. By using <status>, <started> and <finished okay> you may surmise whether the results of call-remote meet your expectations.

- Performance, security
call-remote is faster than call-web because it does not use the HTTP protocol; rather, it uses only a small binary protocol, which is extremely fast, especially when using Unix sockets on the same machine (see new-remote). Note that the binary protocol does not have any inherent security built in; that is part of the reason why it is fast. As such, it is well suited for remote service calls on the same machine or between networked machines on a secure network.
Examples
This example will connect to local Unix socket file "/var/lib/gg/app_name/sock/sock" (a Gliimly application named "app_name"), and make a request named "server" (i.e. it will be processed by source code file "server.gliim") with URL path of "/op=add/key=2" (meaning with input parameters "op=add" and "key=2"). Then, service reply is read and displayed.
 // Create single call
 new-remote srv location "/var/lib/gg/app_name/sock/sock" \
     method "GET" app-path "/app_name" request-path "/server" \
     url-params "/op=add/key=2"
 // Call single service call
 call-remote srv finished-okay sfok
 // Get results of a remote service call
 read-remote srv data rdata
 // Display results
 @Data from service is <<p-out rdata>>

If you are connecting to a service via TCP (and not with a Unix socket like in the example above), the "location" clause in new-remote might be:
 new-remote srv location "192.168.0.28:2400" \
     method "GET" app-path "/app_name" request-path "/server" \
     url-params "/op=add/key=2"

In this case, you are connecting to another service (running on IP "192.168.0.28") on port 2400. See mgrg on how to start a service that listens on a TCP port. You would likely use TCP connectivity only if a service you're connecting to is on a different computer.

See also new-remote.
See also
Distributed computing
call-remote  
new-remote  
read-remote  
run-remote  
See all
documentation
 Call web

Purpose: Get content of URL resource (call a web address).

 call-web <URL> \
     response <result> \
     [ response-code <response code> ] \
     [ response-headers <headers> ] \
     [ status <status> ] \
     [ method <request method> ] \
     [ request-headers \
         [ content-type <content type> ] \
         [ content-length <content length> ] \
         custom <header name>=<header value> [ , ... ] ] \
     [ request-body \
         ( [ fields <field name>=<field value> [ , ... ] ] \
             [ files <file name>=<file location> [ , ... ] ] ) \
         | \
         ( content <body content> ) \
     ] \
     [ error <error> ] \
     [ cert <certificate> | no-cert ] \
     [ cookie-jar <cookie jar> ] \
     [ timeout <timeout> ]

With call-web, you can get the content of any accessible URL resource, for example web page, image, PDF document, XML document, REST API etc. It allows you to programmatically download URL's content, including the header. For instance, you might want to obtain (i.e. download) the source code of a web page and its HTTP headers. You can then save such downloaded items into files, analyze them, or do anything else.

<URL> is the resource locator, for example "https://some.web.page.com" or if you are downloading an image (for instance) it could be "https://web.page.com/image.jpg". Anything you can access from a client (such as web browser), you can also obtain programmatically. You can specify any URL parameters, for example "https://some.web.page.com?par1=val1&par2=val2".
Response and headers
The result is obtained via "response" clause into variable <result>, and the length (in bytes) of such response is obtained via "status" clause in <status> variable.

The response code (such as 200 for "OK", 404 for "Not Found" etc.) is available via "response-code" clause in number <response code>; the default value is 0 if response code is unavailable (due to error for instance).

"response-headers" clause allows for retrieval of response headers (such as HTTP headers) in <headers> variable, as a single string variable.
Request method
You can specify the request method using "method" clause. <request method> is a string value of the request method, such as "GET", "POST", "PUT", "PATCH", "DELETE" or any other.
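
For example, a sketch of a "DELETE" request (the URL is just an illustration):
 call-web "https://website.com/resource/123" method "DELETE" \
     response resp status st
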
Status
In case of error, <status> is negative, with a value of GG_ERR_FAILED (typically indicating a system issue, such as lack of memory, a library problem or local permissions) or GG_ERR_WEB_CALL (an error in accessing the URL or obtaining data); otherwise <status> is the length in bytes of the response (0 or positive). Optionally, you can obtain the error message (if any) via "error" clause in <error> variable. The error is an empty string ("") if there is no error.
Timeout
If "timeout" clause is specified, call-web will timeout if operation has not completed within <timeout> seconds. If this clause is not specified, the default timeout is 120 seconds. If timeout occurs, <status> will be GG_ERR_WEB_CALL and <error> will indicate timeout. Timeout cannot be negative nor greater than 86400 seconds.
HTTPS and certificates
You can call any valid URL that uses a protocol supported by the underlying library (cURL). If you're using the https protocol (or any other that requires an SSL/TLS certificate), you can either use the locally installed CA (certificate authority) issued certificates, specify the location of a certificate with "cert" clause, or, if you do not want it checked, use "no-cert". By default, the locally installed certificates are used; if the URL you are visiting is not trusted via those certificates, and you still want to visit it, use "no-cert"; and if you do have a non-CA (i.e. self-signed) certificate for that URL, use "cert" to provide it as a file name (either a full path or a name relative to the current working directory, see directories).
Cookies
If you'd like to obtain cookies (for example to maintain session or examine their values), use "cookie-jar" clause. <cookie jar> specifies the location of a file holding cookies. Cookies are read from this file (which can be empty or non-existent to begin with) before making a call-web and any changes to cookies are reflected in this file after the call. This way, multiple calls to the same server maintain cookies the same way browser would do. Make sure the same <cookie jar> file is not used across different application spaces, meaning it should be under the application home directory (see directories), which is the most likely method of implementation.
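
For instance, a sketch that keeps cookies across two calls in a file (the file name "cookies.txt" is just an illustration; as noted, keep it under the application home directory):
 call-web "https://website.com/login?user=joe" response resp \
     cookie-jar "cookies.txt"
 call-web "https://website.com/dashboard" response resp2 \
     cookie-jar "cookies.txt"
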
Binary result
The result of call-web (which is <result>) can be a text value or a binary value (for example if getting "JPG", "PNG", "PDF" or other documents). Either way, <status> is the number of bytes in a buffer that holds the value, which is also the value's string-length.
Request body, sending files and arbitrary content
In order to include request body, for instance to send files, use "request-body" clause. Request body is typically used with POST, PUT or PATCH methods. Even though not common, you can also use it with GET, DELETE or any other custom method, such as for example if the resource you wish to identify requires binary data; perhaps a disposable image is used to identify the resource.

- Structured content
Use "fields" and/or "files" subclauses to send a structured body request in the form of name/value pairs, the same as sent from an HTML form. To do that, you can specify fields with "fields" subclause in the form of <field name>=<field value> pairs separated by a comma. For instance, here two fields are set (field "source" with value "web" and field "act" with value "setcookie"):
 call-web "http://website.com/app_name/some_request" response resp response-code rc status len \
     request-body fields "source"="web","act"="setcookie"

To include files, use "files" subclause in the form of <file name>=<file location> separated by commas. For example, here "file1" is the file name sent (which can be anything), and local file "uploadtest.jpg" is the file whose contents is sent; and "file23" is the file name sent (which can be anything),  and "fileup4.pdf" is the actual local file read and sent. In this case files are in the application home directory (see directories), but in general you can specify a relative or absolute path:
 call-web "http://website.com" response resp response-code rc status len \
     request-body files "file1"="uploadtest.jpg", "file23"="fileup4.pdf"

You can specify both "files" and "fields" fields, for instance (along with getting error text and status):
 call-web "http://website.com/app_name/some_request" response resp response-code rc
     request-body fields "source"="web","act"="setcookie" \
         files "file1"="uploadtest.jpg", "file23"="fileup4.pdf" \
     status st error err

There is no limit on the number of files and fields you can specify, other than of the underlying HTTP protocol.

- Non-structured content
To send any arbitrary (non-structured) content in the request body, such as JSON text for example, use "content" subclause:
 call-web "https://website.com" response resp \
     request-headers content-type "application/json" \
     request-body content "{ \
         \"employee\": { \
             \"name\":       \"sonoo\", \
             \"salary\":      56000, \
             \"married\":    true \
         } \
     }"

<content length> number in "content-length" subclause (in "request-headers" clause) can be specified to denote the length of body content:
 read-file "somefile" to file_contents status file_length
 call-web "https://website.com" response resp \
     request-headers content-type "image/jpeg" \
     request-headers content-length file_length \
     request-body content file_contents

If "content-length" is not used, then it is assumed to be the length of string <content>.
Request headers
If your request has a body (i.e. "request-body" clause is used), you can set the content type with "content-type" subclause of a request-headers clause:
 call-web "https://<web address>/resource" \
     request-headers content-type "application/json" \
     request-body content some_json

Note that using "content-type" without the request body may be ignored by the server processing your request or may cause it to consider the request invalid. If "content-type" is not used, the default is "multipart/form-data" if "fields" or "files" subclause(s) are used with "body-request" clause. Otherwise, if you use "content" subclause to send other types of data, you must set content type explicitly via "content-type" subclause of "request-headers" clause.

You can also specify custom request headers with "request-headers" clause, using "custom" subclause with a list of <header name>=<header value> pairs separated by a comma. For example, here custom header "Gliimly-header" has value of "Some_ID", and "Another-Header" a value of "New_ID":
 call-web "http://website.com/<app name>/<request name>?act=get_file" response resp response-code rc status len \
     request-headers custom "Gliimly-header"="Some_ID", "Another-Header"="New_ID"

On the receiving side you can get any such custom header by using "header" clause of the get-req statement:
 get-req header "Gliimly-header" to hvh0
 get-req header "Another-Header" to hvh1

Examples
Get the web page and print it out:
 call-web "https://website.com/page.html" response resp
 p-out resp

Get the "JPG" image from the web and save it to a file "pic.jpg":
 call-web "https://website.com/images/someimg.jpg" status wlen response resp
 write-file "pic.jpg" from resp length wlen

See also
Web
call-web  
out-header  
send-file  
silent-header  
See all
documentation
 CGI

You can run Gliimly application as a CGI (Common Gateway Interface) program, if your web server supports CGI. This is not recommended in general, as CGI programs do not exhibit great performance. However in some cases you may need to use CGI, such as when performance is not of critical importance, or when other methods of execution are not feasible.

To run your application with CGI, use the command-line program. Since Gliimly applications require running in the security context of the user who owns the application, you must use "suexec" (or a similar feature) of your web server.

The following script sets up an application named "func_test" (any kind of application will do) to run as CGI (after it's been compiled with gg) on Apache web server running on Ubuntu 18 and up. For other web servers/distros, consult their documentation on how to setup CGI for a program.
#prep repos
    sudo apt update

#enable CGI on apache
    sudo a2enmod cgid
    sudo service apache2 restart

#Install suexec-custom for Apache
    sudo apt-get -y install apache2-suexec-custom
    sudo a2enmod suexec
    sudo service apache2 restart

#setup a "gg" directory under cgi-bin where your application can run
    sudo mkdir -p /usr/lib/cgi-bin/gg
    sudo chown $(whoami):$(whoami)  /usr/lib/cgi-bin/gg
    sudo sed -i '1c\/usr/lib/cgi-bin/gg' /etc/apache2/suexec/www-data

#copy your program to "gg" directory
    sudo mv /var/lib/gg/bld/func-test/func-test /usr/lib/cgi-bin/gg
    sudo chown $(whoami):$(whoami)  /usr/lib/cgi-bin/gg/func-test
    sudo chmod 700  /usr/lib/cgi-bin/gg/func-test

#add user/group of Gliim application user to suexec    
    sudo sed -i "/SuexecUserGroup/d" /etc/apache2/sites-enabled/000-default.conf
    sudo sed -i "s/<\/VirtualHost>/SuexecUserGroup $(whoami) $(whoami)\n<\/VirtualHost>/g" /etc/apache2/sites-enabled/000-default.conf
    sudo service apache2 restart

#the application is at http://127.0.0.1/cgi-bin/gg/func-test?...
#substitute 127.0.0.1 for your web address

See also
Running application
application-setup  
CGI  
command-line  
service  
See all
documentation
 Client API

You can use C API client library to connect to Gliimly:
See Examples section below for detailed examples.
Sending a request to Gliimly service
The following function is used to make a call using C API:
 int gg_cli_request (gg_cli *req);

All input and output is contained in a single variable of type "gg_cli", the pointer to which is passed to "gg_cli_request()" function that sends a request to the service. A variable of type "gg_cli" must be initialized to zero before using it (such as with {0} initialization, "memset()" or "calloc()"), or otherwise some of its members may have random values:
 // Define and initialize request variable
 gg_cli req = {0};
 // You could also do:
 // memset ((char*)&req, 0, sizeof(gg_cli));
 ...
 // Set members of 'req' variable (see below)
 ...
 // Make a call
 int result = gg_cli_request (&req);

Type "gg_cli" is defined as (i.e. public members of it):
 typedef struct {
     const char *server; // the IP:port/socket_path to server
     const char *req_method; // request method
     const char *app_path; // application path
     const char *req; // request name
     const char *url_params; // URL params (path+query string)
     const char *content_type; // content type
     int content_len; // content len
     const char *req_body; // request body (i.e. content)
     char **env; // environment to pass to service
     int timeout; // timeout for request
     int req_status; // status of request from service
     int data_len; // length of response from service
     int error_len; // length of error from service
     char *errm; // error message when gg_cli_request returns other than GG_OKAY
     gg_cli_out_hook out_hook; // hook to get data output as soon as it arrives
     gg_cli_err_hook err_hook; // hook to get error output as soon as it arrives
     gg_cli_done_hook done_hook; // get all data when request is complete
     int thread_id; // custom ID when executing in a multithreaded fashion
     volatile char done; // indicator that the request has completed
     int return_code; // the return code from gg_cli_request()
 } gg_cli;


- Mandatory input
The following members of "gg_cli" type must be supplied in order to make a call to a service:
- URL parameters
"url_params" is the URL parameters, meaning input parameters (as path segments and query string, see request). URL parameters can be NULL or empty, in which case it is not used.

- Request body (content)
"req_body" is the request body, which can be any text or binary data. "content_type" is the content type of request body (for instance "application/json" or "image/jpg"). "content_len" is the length of request body in bytes. A request body is sent only if "content_type" and "req_body" are not NULL and not empty, and if "content_len" is greater than zero.

- Passing environment to service
"env" is any environment variables that should be passed along to the service. You can access those in Gliimly via "environment" clause of get-sys statement. This is an array of strings, where name/value pairs are specified one after the other, and which always must end with NULL. For example, if you want to use variable "REMOTE_USER" with value "John" and variable "MY_VARIABLE" with value "8000", then it might look like this:
 char *env[5];
 env[0] = "REMOTE_USER";
 env[1] = "John"
 env[2] = "MY_VARIABLE";
 env[3] = "8000"
 env[4] = NULL;

Thus, if you are passing N environment variables to the service, you must size "env" as "char*" array with 2*N+1 elements.

Note that in order to suppress output of HTTP headers from the service, you can include environment variable "GG_SILENT_HEADER" with value "yes"; to let the service control headers output (either by default, with "-z" option of mgrg or with silent-header) simply omit this environment variable.

- Timeout
"timeout" is the number of seconds a call to the service should not exceed. For instance if the remote service is taking too long or if the network connection is too slow, you can limit how long to wait for a reply. If there is no timeout, then "timeout" value should be zero. Note that DNS resolution of the host name (in case you are using a TCP socket) is not counted in timeout. Maximum value for timeout is 86400.

Even if timeout is set to 0, a service call may eventually timeout due to underlying socket and network settings. Note that even if your service call times out, the actual service executing may continue until it's done.
- Thread ID
"thread_id" is an integer that you can set and use when your program is multithreaded. By default it's 0. This number is set by you and passed to hooks (your functions called when request is complete or data available). You can use this number to differentiate the data with regards to which thread it belongs to.

- Completion indicator and return code
When your program is multithreaded, it may be useful to know when (and if) a request has completed. "done" is set to "true" when a request completes, and "return_code" is the return value from gg_cli_request() (see below for a list). In a single-threaded program, this information is self-evident, but if you are running more than one request at the same time (in different threads), you can use these to check on each request executing in parallel (for instance in a loop in the main thread).

Note that "done" is "true" specifically when all the results of a request are available and the request is about to be completed. In a multithreaded program, it means the thread is very soon to terminate or has already terminated; it does not mean that thread has positively terminated. Use standard "pthread_join()" function to make sure the thread has terminated if that is important to you.
Return value of gg_cli_request()
The following are possible return values from "gg_cli_request()" (available in "return_code" member of "gg_cli" type):
You can obtain the error message (corresponding to the above return values) in "errm" member of "gg_cli" type.
Server reply
The service reply is split in two. One part is the actual result of processing (called "stdout" or standard output), and that is "data". The other is the error messages (called "stderr" or standard error), and that's "error". All of service output goes to "data", except from report-error and pf-out/pf-url/pf-web (with "to-error" clause) which goes to "error". Note that "data" and "error" streams can be co-mingled when output by the service, but they will be obtained separately. This allows for clean separation of output from any error messages.

You can obtain the service reply when it's ready in its entirety (likely the most common use), or as it arrives bit by bit (see asynchronous hooks further below).
Status of request execution
"req_status" member of "gg_cli" type is the request status when a request had executed; it is somewhat similar to an exit status of a program. A Gliimly service request returns status by means of handler-status statement. Note that "req_status" is valid only if "gg_cli_request()" returned GG_OKAY (or if "return_code" is GG_OKAY for multi-threaded programs).
Getting data reply (stdout)
Data returned from a request is valid only if "gg_cli_request()" returned GG_OKAY (or if "return_code" is GG_OKAY for multi-threaded programs). In that case, use "gg_cli_data()" function, for example:
 // Declare and initialize request variable
 gg_cli req = {0};
 // Setup the req variable
 ...
 // Execute request
 if (gg_cli_request (&req) == GG_OKAY) {
     char *data = gg_cli_data (&req); // data response
     int data_len = req.data_len; // length of data response in bytes
 }

"data_len" member of "gg_cli" type will have the length of data response in bytes. The reply is always null-terminated as a courtesy, and "data_len" does not include the terminating null byte.

"gg_cli_data()" returns the actual response (i.e. data output) from service as passed to "data" stream. Any output from service will go there, except when "to-error" clause is used in pf-out, pf-url and pf-web - use these constructs to output errors without stopping the service execution. Additionaly, the output of report-error will also not go to data output.
Getting error reply (stderr)
An error reply returned from a service is valid only if "gg_cli_request()" returned GG_OKAY (or if "return_code" is GG_OKAY for multi-threaded programs). In that case, use "gg_cli_error()" function, for example:
 // Declare and initialize request variable
 gg_cli req = {0};
 // Setup the req variable
 ...
 // Execute request
 if (gg_cli_request (&req) == GG_OKAY) {
     char *err = gg_cli_error (&req); // error response
     int err_len = req.error_len; // length of error response in bytes
 }

"gg_cli_error()" returns any error messages from a service response, i.e. data passed to "error" stream. It is comprised of any service output when "to-error" clause is used in pf-out, pf-url and pf-web, as well as any output from report-error.

"error_len" member (of "gg_cli" type above) will have the length of error response in bytes. The response is always null-terminated as a courtesy, and "error_len" does not include the terminating null byte.
Freeing the result of a request
Once you have obtained the result of a request, and when no longer needed, you should free it by using "gg_cli_delete()":
 // Declare and initialize request variable
 gg_cli req = {0};
 // Setup the req variable
 ...
 // Execute request
 gg_cli_request (&req);
 // .. Use the result ..
 // Free request output (data and error streams)
 gg_cli_delete (&req);

If you do not free the result, your program may experience a memory leak. If your program exits right after issuing any request(s), you may skip freeing results as that is automatically done on exit by the Operating System.

You can use "gg_cli_delete()" regardless of whether "gg_cli_request()" returned GG_OKAY or not.
Completion hook
A function you wrote can be called when a request has completed. This is useful in multithreaded invocations, where you may want to receive complete request's results as they are available. To specify a completion hook, you must write a C function with the following signature and assign it to "done_hook" member of "gg_cli" typed variable:
 typedef void (*gg_cli_done_hook)(char *recv, int recv_len, char *err, int err_len, gg_cli *req);

"recv" is the request's data output, "recv_len" is its length in bytes, "err" is the request's error output, and "err_len" is its length in bytes. "req" is the request itself which you can use to obtain any other information about the request. In a single threaded environment, these are available as members of the request variable of "gg_cli" type used in the request, and there is not much use for a completion hook.

See an example with asynchronous hooks.
Asynchronous hooks
You can obtain the service's reply as it arrives by specifying read hooks. This is useful if the service supplies partial replies over a period of time, and your application can get those partial replies as they become available.

To specify a hook for data output (i.e. from stdout), you must write a C function with the following signature and assign it to "out_hook":
 typedef void (*gg_cli_out_hook)(char *recv, int recv_len, gg_cli *req);

"recv" is the data received and "recv_len" is its length.

To specify a hook for error output (i.e. from stderr), you must write a C function with the following signature and assign it to "err_hook":
 typedef void (*gg_cli_err_hook)(char *err, int err_len, gg_cli *req);

"err" is the error received and "err_len" is its length.

"req" (in both hooks) is the request itself which you can use to obtain any other information about the request.

To register these functions with "gg_cli_request()" function, assign their pointers to "out_hook" and "err_hook" members of request variable of type "gg_cli" respectively. Note that the output hook (i.e. hook function of type "gg_cli_out_hook") will receive empty string ("") in "recv" and "recv_len" will be 0 when the request has completed, meaning all service output (including error) has been received.

For example, functions "get_output()" and "get_err()" will capture data as it arrives and print it out, and get_complete() will print the final result:
 // Output hook
 void get_output(char *d, int l, gg_cli *req)
 {
     printf("Got output of [%.*s] of length [%d] in thread [%d]", l, d, l, req->thread_id);
 }

 // Error hook
 void get_err(char *d, int l, gg_cli *req)
 {
     printf("Got error of [%.*s] of length [%d], status [%d]", l, d, l, req->req_status);
 }

 // Completion hook
 void get_complete(char *data, int data_len, char *err, int err_len, gg_cli *req)
 {
     printf("Got data [%.*s] of length [%d] and error of [%.*s] of length [%d], status [%d], thread [%d]\n", data_len, data, data_len, err_len, err, err_len, req->req_status, req->thread_id);
 }

 ...

 gg_cli req = {0};
 ...
 // Register output and error hooks, as well as a completion hook
 req.out_hook = &get_output;
 req.err_hook = &get_err;
 req.done_hook = &get_complete;

Multithreading
The Gliimly client is MT-safe, meaning you can use it both in single-threaded and multi-threaded programs. Note that each thread must have its own copy of "gg_cli" request variable, since it provides both input and output parameters to a request call and as such cannot be shared between the threads.
Usage
Do not use this API directly in Gliimly code - use call-remote instead, which is made specifically for use in .gliim files. Otherwise, you can use this API with any program.
Using API without Gliimly
You can use API without installing Gliimly. To do that:
Note that you do not need to install any other dependencies, as API is entirely contained in the aforementioned source files.
Examples
Simple example
The following example is a simple demonstration, with minimum of options used. Copy the C code to file "cli.c" in a directory of its own:
 #include "gcli.h"

 void main ()
 {
     // Request type gg_cli - create a request variable and zero it out
     gg_cli req = {0};

     req.server = "/var/lib/gg/helloworld/sock/sock"; // Unix socket
     req.req_method = "GET"; // GET HTTP method
     req.app_path = "/helloworld"; // application path
     req.req = "/hello-simple"; // request name

     // Make a request
     int res = gg_cli_request (&req);

     // Check return status, and if there's an error, display error code and the
     // corresponding error message. Otherwise, print out service response.
     if (res != GG_OKAY) printf("Request failed [%d] [%s]\n", res, req.errm);
     else printf("%s", gg_cli_data(&req));

     // Free up resources so there are no memory leaks
     gg_cli_delete(&req);
 }

To make this client application:
gcc -o cli cli.c $(gg -i)

In this case, you're using a Unix socket to communicate with the Gliimly service. To test with a Gliimly service handler, copy the following code to "hello_simple.gliim" file in a separate directory:
 begin-handler /hello_simple public
    silent-header
    @Hi there!
 end-handler

Create and make the Gliimly application and run it via local Unix socket:
sudo mgrg -i -u $(whoami) helloworld
gg -q
mgrg -m quit helloworld
mgrg -w 1 helloworld

Run the client:
./cli

The output is, as expected:
 Hi there!

Example with more options
This example demonstrates using multiple options, including using TCP sockets connecting to a host and port number, environment variables, query string, request body and request execution timeout. It will also show the separation of "data" and "error" (i.e. stdout and stderr) streams from the service.

Copy this to file "cli1.c" in a directory of its own - note that in this example a server will run on localhost (127.0.0.1) and TCP port 2301:
 #include "gcli.h"

 void main ()
 {
     // Request type gg_cli - create a request variable
     gg_cli req;
     // Initialize request
     memset (&req, 0, sizeof(req));

     // Add 3 environment variables (in the form of name, value, name, value, ... , NULL)
     char *env[] = { "REMOTE_USER", "John", "SOME_VAR", "SOME\nVALUE", "NEW_VAR", "1000", NULL };

     // Create a request
     // Environment variables to pass to service request
     req.env = env;
     // Server IP and port
     req.server = "127.0.0.1:2301";
     // Request method
     req.req_method = "GET";
     // Application path
     req.app_path = "/helloworld";
     // Request
     req.req = "/hello";
     // URL parameters (path and query string)
     req.url_params = "par1=val1&par2=91";
     // Content type
     req.content_type = "application/json";
     // Content (i.e. request body)
     req.req_body = "This is request body";
     // Content length
     req.content_len = strlen (req.req_body);
     // No timeout (set to 0)
     req.timeout = 0;

     // Make a request
     int res = gg_cli_request (&req);

     // Check return status, and if there's an error, display error code and the
     // corresponding error message
     if (res != GG_OKAY) printf("Request failed [%d] [%s]\n", res, req.errm);
     else {
        // If successful, display request results
        // Exit code from the service processing
        printf("Server status %d\n", req.req_status);
        // Length of reply from service
        printf("Len of data %d\n", req.data_len);
        // Length of any error from service
        printf("Len of error %d\n", req.error_len);
        // Reply from service
        printf("Data [%s]\n", gg_cli_data(&req));
        // Any error message from service
        printf("Error [%s]\n", gg_cli_error(&req));
     }

     // Free up resources so there are no memory leaks
     gg_cli_delete(&req);
 }

Note that the URL parameters (i.e. "req.url_params") could have been written as a combination of a path segment and query string (see request):
 req.url_params = "/par1/val1?par2=91";

or just as a path segment:
 req.url_params = "/par1=val1/par2=91";

To make this client application:
gcc -o cli1 cli1.c $(gg -i)

To test it, you can create a Gliimly application. Copy this to "hello.gliim" file in a separate directory:
 begin-handler /hello public
     silent-header

     // Get request body
     request-body rb

     // Input params
     get-param par1
     get-param par2

     // Get environment variables passed on from the client
     get-sys environment "REMOTE_USER" to ruser
     get-sys environment "SOME_VAR" to somev
     get-sys environment "NEW_VAR" to newv

     // Output, print the environment variables, the PID of server process and the request body received from the client
     get-req process-id to pid
     @Hello World! [<<p-out ruser>>] [<<p-out somev>>] [<<p-out newv>>] [<<p-out par1>>] [<<p-out par2>>] <<p-num pid>> <<p-out rb>>

     // Output back a number of lines, generally as "Output line #<num of line>"
     // After printing output line #1419, print "Line 1419 has an error" to stderr
     // After printing output line #4419, report an error and exit
     // This demonstrates outputting data to both stdout and stderr
     start-loop repeat 8000 use i start-with 0
         @Output line #<<p-num i>>
         if-true i equal 1419
             pf-out "Line %ld has an error\n", i to-error
         end-if
         if-true i equal 4419
             // Exit code of the service execution
             handler-status 82
             report-error "%s", "Some error!"
         end-if
     end-loop
 end-handler

Create and make the Gliimly application and run it on local TCP port 2301 to match the client above:
sudo mgrg -i -u $(whoami) helloworld
gg -q
mgrg -m quit helloworld
mgrg -w 1 -p 2301 helloworld

Run the client:
./cli1

The output:
Server status 82
Len of data 78530
Len of error 35
Data [Hello World! [John] [SOME
VALUE] [1000] [val1] [91] 263002 This is request body
Output line #0
Output line #1
Output line #2
Output line #3
Output line #4
Output line #5
Output line #6
Output line #7

...
Output line #4413
Output line #4414
Output line #4415
Output line #4416
Output line #4417
Output line #4418
Output line #4419
]
Error [Line 1419 has an error
Some error!
]

The output shows service exit code (82, see handler-status in the Gliimly code above), length of data output, and other information which includes environment variables passed to the service from the client, the PID of server process, the request body from the client, and then the error output. Note that the data output (stdout) and the error output (stderr) are separated, since the protocol does use separate streams over the same connection. This makes working with the output easy, while the data transfer is fast at the same time.
See also
API
Client-API  
Server-API  
See all
documentation
 Close file

Purpose: Close file.

 close-file file-id <file id> \
     [ status <status> ]

close-file closes file <file id> previously opened with open-file, where <file id> is an open file identifier.

You can obtain the status of file closing via <status> number (in "status" clause). The <status> is GG_OKAY if the file was closed, or GG_ERR_CLOSE if the file could not be closed.

If you do not close a file opened with open-file, Gliimly will automatically close it when the request ends.
Examples
See open-file.
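A minimal sketch of opening and closing a file (assuming the "file-id" clause of open-file, as documented there; the file name and the status check are just an illustration):
 open-file "notes.txt" file-id nf
 // ... read from or write to the file ...
 close-file file-id nf status st
 if-true st equal GG_OKAY
     @File closed
 end-if
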
See also
Files
close-file  
copy-file  
delete-file  
file-position  
file-storage  
file-uploading  
lock-file  
open-file  
read-file  
read-line  
rename-file  
stat-file  
temporary-file  
uniq-file  
unlock-file  
write-file  
See all
documentation
 Code blocks

Code blocks
Use curly braces ("{" and "}") to open and close a code block. A code block creates a separate scope, beginning with "{" and ending with "}", for variables first created within it. Note that if a variable already exists in an outer scope, it cannot be created in the inner scope.

Note that if-true, run-query, start-loop and read-line statements contain implicit code blocks, meaning the code between them and the accompanying end-statement is within implicit "{" and "}".
Examples
The following code will first print out "outside" and then "inside" twice, illustrating the fact that variable "s1" is defined only once, in the outer scope. Variable "s2" exists only in the inner scope:
 begin-handler /scope public
     set-string s1="outside"
     @<<p-out s1>>
     {
         set-string s2="inner variable"
         set-string s1="inside"
         @<<p-out s1>>
     }
     @<<p-out s1>>
 end-handler

See also
Program flow
break-loop  
code-blocks  
continue-loop  
do-once  
exit-handler  
if-defined  
if-true  
set-bool  
start-loop  
See all
documentation
 Command line

A Gliimly application can run as a web application or a command-line program, or both - such as when some requests are fulfilled through the web interface and others executed on the command line. Note that Gliimly produces two separate executables: a service one and a command-line one. They are different because the command-line program does not need the service library and is thus smaller.

The name of the command-line executable is the same as the application name, and its path is (assuming <app name> is the application name):
/var/lib/gg/bld/<app name>/<app name>

Output
A command-line program works the same way as a service executable, and the output is the same, except that it is directed to stdout (standard output) and stderr (standard error).
Exit code
To specify the exit code of a command-line program, use handler-status. To exit the program, use exit-handler, or otherwise the program will exit when it reaches the end of a request.
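
For instance, a minimal sketch of a handler that sets the exit code and stops (the request path "/fail" is just an illustration):
 begin-handler /fail public
     @Something went wrong
     handler-status 1
     exit-handler
 end-handler
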
Executing a request
Here is how to execute a request "add-stock" in application "stock" with parameters "name" having a value of "ABC" and "price" a value of "300":
gg -r --app="/stock" --req="/add-stock?name=ABC&price=300" --exec

Note that if you specify parameters as part of the path, you could write the above the same way as in a URL:
gg -r --app="/stock" --req="/add-stock/name=ABC/price=300" --exec

You can generate the shell code to execute the above without using gg by omitting "--exec" option, for instance:
gg -r --app="/stock" --req="/add-stock/name=ABC/price=300"

Including a request body
You can include a request body when executing a single-run program. It is always included as the standard input (stdin) to the program.

You must provide the length of this input and the type of input, as well as a request method (such as POST, PUT, PATCH, GET, DELETE or any other).

Here is an example of using a request body to make a POST request on the command line - the application name is "json" and request name is "process". File "prices.json" is sent as request body:
gg -r --app=/json --req='/process?act=get_total&period=YTD' --method=POST --content=prices.json --content-type=application/json --exec

You can generate the shell code for the above by omitting "--exec" option of gg utility.

Note that you can also include any other headers as environment variables by using the "HTTP_" convention, where custom header names are capitalized, dashes are replaced with underscores, and the name is prefixed with "HTTP_"; for example, header "Gliimly-Header" would be set as:
export HTTP_GLIIMLY_HEADER="some value"

You would set the "HTTP_" variable(s) prior to executing the program.
Suppressing HTTP header output for the entire application
If you wish to suppress the output of HTTP headers for all requests, use "--silent-header" option in "gg -r":
gg -r --app="/stock" --req="/add-stock/name=ABC/price=300" --exec --silent-header

This will suppress the output of HTTP headers (either the default ones or those output with out-header), or for any other case where headers are output. This has the same effect as silent-header; the only difference is that it applies to the entire application.
URL-encoding the input
Any data in "--req" option (and consequently in PATH_INFO or QUERY_STRING environment vairables if calling directly from shell) must be formatted to be a valid URL; for example, data that contains special characters (like "&" or "?") must be URL-encoded, for instance:
gg -r --app="/stock" --req="/add-stock/name=ABC%3F/price=300"

In this case, parameter "name" has value of "ABC?", where special character "?" is encoded as "%3F".

To make sure all your input parameters are properly URL-encoded, you can use Gliimly's v1 code processor:
$($(gg -l)/v1 -urlencode '<your data>')

For instance, to encode "a?=b" as a parameter:
gg -r --app="/stock" --req="/add-stock/name=$($(gg -l)/v1 -urlencode 'AB?')/price=300"

If your parameters do not contain characters that need URL encoding, then you can skip this.
CGI
You can also use a command-line program with CGI (Common Gateway Interface).
See also
Running application
application-setup  
CGI  
command-line  
service  
See all
documentation
 Commit transaction

Purpose: Commits a database transaction.

 commit-transaction [ @<database> ] \
     [ on-error-continue | on-error-exit ] \
     [ error <error> ] [ error-text <error text> ] \
     [ options <options> ]

Database transaction started with begin-transaction is committed with commit-transaction.

<options> (in "options" clause) is any additional options to send to database you wish to supply for this functionality.

Once you start a transaction with begin-transaction, you must either commit it with commit-transaction or roll it back with rollback-transaction. If you do neither, your transaction will be rolled back once the request has completed, and your program will stop with an error message. This is because opening a transaction and leaving it without a commit or a rollback is a bug in your program.

You must use begin-transaction, commit-transaction and rollback-transaction instead of calling this functionality through run-query.
Database
<database> is specified in "@" clause and is the name of the database-config-file. If omitted, your program must use exactly one database (see --db option in gg).
Error handling
The error code is available in <error> variable in "error" clause - this code is always "0" if successful. The <error> code may or may not be a number but is always returned as a string value. In case of error, error text is available in "error-text" clause in <error text> string.

"on-error-continue" clause specifies that request processing will continue in case of an error, whereas "on-error-exit" clause specifies that it will exit. This setting overrides database-level db-error for this specific statement only. If you use "on-error-continue", be sure to check the error code.

Note that if database connection was lost, and could not be reestablished, the request will error out (see error-handling).
Examples
 begin-transaction @mydb
 run-query @mydb="insert into employee (name, dateOfHire) values ('Terry', now())"
 run-query @mydb="insert into payroll (name, salary) values ('Terry', 100000)"
 commit-transaction @mydb

See also
Database
begin-transaction  
commit-transaction  
current-row  
database-config-file  
db-error  
mariadb-database  
postgresql-database  
rollback-transaction  
run-query  
sqlite-database  
See all
documentation
 Connect apache tcp socket

This shows how to connect your application listening on TCP port <port number> (started with "-p" option in mgrg) to Apache web server.

- Step 1:
To setup Apache as a reverse proxy and connect your application to it, you need to enable FastCGI proxy support, which generally means "proxy" and "proxy_fcgi" modules - this is done only once:
- Step 2:
Edit the Apache configuration file:
Add this to the end of file ("/<app path>" is the application path, see request):
ProxyPass "/<app path>/" fcgi://127.0.0.1:<port number>/

- Step 3:
Finally, restart Apache. On Debian systems (like Ubuntu) or OpenSUSE:
sudo systemctl restart apache2

On Fedora systems (like RedHat) and Arch Linux:
sudo systemctl restart httpd

Note: you must not have any other URL resource that starts with "/<app path>/" (such as for example "/<app path>/something") as the web server will attempt to pass them as a reverse proxy request, and they will likely not work. If you need to, you can change the application path to be different from "/<app path>", see request.
See also
Web servers
connect-apache-tcp-socket  
connect-apache-unix-socket  
connect-haproxy-tcp-socket  
connect-nginx-tcp-socket  
connect-nginx-unix-socket  
See all
documentation
 Connect apache unix socket

This shows how to connect your application listening on a Unix socket (started with mgrg) to Apache web server.

- Step 1:
To setup Apache as a reverse proxy and connect your application to it, you need to enable FastCGI proxy support, which generally means "proxy" and "proxy_fcgi" modules - this is done only once:
- Step 2:
Edit the Apache configuration file:
Add this to the end of file ("/<app path>" is the application path (see request) and "<app name>" is your application name):
ProxyPass "/<app path>/" unix:///var/lib/gg/<app name>/sock/sock|fcgi://localhost/<app path>

- Step 3:
Finally, restart Apache. On Debian systems (like Ubuntu) or OpenSUSE:
sudo systemctl restart apache2

On Fedora systems (like RedHat) and Arch Linux:
sudo systemctl restart httpd

Note: you must not have any other URL resource that starts with "/<app path>/" (such as for example "/<app path>/something") as the web server will attempt to pass them as a reverse proxy request, and they will likely not work. If you need to, you can change the application path to be different from "/<app path>", see request.
See also
Web servers
connect-apache-tcp-socket  
connect-apache-unix-socket  
connect-haproxy-tcp-socket  
connect-nginx-tcp-socket  
connect-nginx-unix-socket  
See all
documentation
 Connect haproxy tcp socket

This shows how to connect your application listening on TCP port <port number> (started with "-p" option in mgrg) to HAProxy load balancer.

HAProxy can balance the load between different web servers, which in turn are connected to your applications. Since in this case HAProxy does not directly communicate with a Gliimly application (which is behind a web server), you may look up examples of this setup online.

When you want HAProxy to directly communicate with a Gliimly application server, you may use configuration similar to this (shown is just a bare-bone setup needed to accomplish the goal):
global
    user haproxy
    group haproxy
    daemon

defaults
    mode    http
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend front_server
    mode http
    bind *:90
    use_backend backend_servers if { path_reg -i ^.*\/func_test\/.*$ } 
    option forwardfor

fcgi-app gliim-fcgi
    log-stderr global
    docroot /var/lib/gg/func_test/app
    path-info ^.+(/func_test)(/.+)$ 

backend backend_servers                                                                                                                     
    mode http
    filter fcgi-app gliim-fcgi
    use-fcgi-app gliim-fcgi
    server s1 127.0.0.1:2301 proto fcgi

Restart HAProxy:
sudo systemctl restart haproxy

Note that Gliimly application path is "/func_test" (and the application name may or may not be the same, see request). The TCP port of the application is "2301" (could be any port number you choose that's greater than 1000 and lower than 65535).

HAProxy itself is bound to port 90, and "path_reg" specifies which URLs will be passed to your Gliimly application (i.e. they must have "/func_test/" in the URL). "path-info" specifies SCRIPT_NAME and PATH_INFO (as "()" regular sub-expressions), which are as such passed to your Gliimly application. "docroot" is set to the application home directory (see directories).

A Gliimly application (named "func_test") would have been started with (using the same application name "func_test" and TCP port "2301"):
mgrg -p 2301 func_test

Now you should be able to connect and load-balance your Gliimly application servers directly from HAProxy.
See also
Web servers
connect-apache-tcp-socket  
connect-apache-unix-socket  
connect-haproxy-tcp-socket  
connect-nginx-tcp-socket  
connect-nginx-unix-socket  
See all
documentation
 Connect nginx tcp socket

This shows how to connect your application listening on TCP port <port number> (started with "-p" option in mgrg) to Nginx web server.

- Step 1:
You will need to edit the Nginx configuration file. For Ubuntu and similar:
sudo vi /etc/nginx/sites-enabled/default

while on Fedora and other systems it might be at:
sudo vi /etc/nginx/nginx.conf


Add the following in the "server {}" section ("/<app path>" is the application path, see request):
location /<app path>/ { include /etc/nginx/fastcgi_params; fastcgi_pass  127.0.0.1:<port number>; }

- Step 2:
Finally, restart Nginx:
sudo systemctl restart nginx

Note: you must not have any other URL resource that starts with "/<app path>/" (such as for example "/<app path>/something") as the web server will attempt to pass them as a reverse proxy request, and they will likely not work. If you need to, you can change the application path to be different from "/<app path>", see request.
See also
Web servers
connect-apache-tcp-socket  
connect-apache-unix-socket  
connect-haproxy-tcp-socket  
connect-nginx-tcp-socket  
connect-nginx-unix-socket  
See all
documentation
 Connect nginx unix socket

This shows how to connect your application listening on a Unix socket (started with mgrg) to Nginx web server.

- Step 1:
You will need to edit the Nginx configuration file. For Ubuntu and similar:
sudo vi /etc/nginx/sites-enabled/default

while on Fedora and other systems it might be at:
sudo vi /etc/nginx/nginx.conf


Add the following in the "server {}" section ("/<app path>" is the application path (see request) and "<app name>" is your application name):
location /<app path>/ { include /etc/nginx/fastcgi_params; fastcgi_pass  unix:///var/lib/gg/<app name>/sock/sock; }

- Step 2:
Finally, restart Nginx:
sudo systemctl restart nginx

Note: you must not have any other URL resource that starts with "/<app path>/" (such as for example "/<app path>/something") as the web server will attempt to pass them as a reverse proxy request, and they will likely not work. If you need to, you can change the application path to be different from "/<app path>", see request.
See also
Web servers
connect-apache-tcp-socket  
connect-apache-unix-socket  
connect-haproxy-tcp-socket  
connect-nginx-tcp-socket  
connect-nginx-unix-socket  
See all
documentation
 Continue loop

Purpose: Continue to the top of a loop.

 continue-loop

continue-loop will continue execution at the top of the loop at start-loop, run-query, or read-line statements.
Examples
Skip the processing when the loop counter reaches 300:
 set-number cont_loop = 300
 start-loop repeat 1000 use i start-with 1
     if-true i equal cont_loop
         continue-loop
     end-if
     @Completed <<p-num i>> loops so far
 end-loop

See also
Program flow
break-loop  
code-blocks  
continue-loop  
do-once  
exit-handler  
if-defined  
if-true  
set-bool  
start-loop  
See all
documentation
 Copy file

Purpose: Copies one file to another.

 copy-file <source file> to <target file> [ status <status> ]

File <source file> is copied into <target file>, which is created if it does not exist.

Status can be obtained in the <status> variable: it is GG_ERR_OPEN if the source file cannot be opened, GG_ERR_CREATE if the target file cannot be created, GG_ERR_READ if the source file cannot be read, GG_ERR_WRITE if the target file cannot be written, or the number of bytes copied (including 0) on success.
Examples
 copy-file "/home/user/source_file" to "/home/user/target_file" status st

See also
Files
close-file  
copy-file  
delete-file  
file-position  
file-storage  
file-uploading  
lock-file  
open-file  
read-file  
read-line  
rename-file  
stat-file  
temporary-file  
uniq-file  
unlock-file  
write-file  
See all
documentation
 Copy string

Purpose: Copies string to another string.

 copy-string <source string> to <dest string> \
     [ start-with <start with> ] \
     [ length <length> ]

Use copy-string to copy <source string> to <dest string>.

<start with> number (in "start-with" clause) is the position in <source string> to start copying from, with 0 being the first byte.

Without "length" clause, the whole of <source string> is copied. With "length" clause, exactly <length> bytes are copied into <dest string>.

You can copy a string to itself. In this case, the original string remains and the new string references a copy:
 set-string str = "original string" // string to change

 set-string orig = str // references original copy of the string to change

 copy-string str to str // make a copy of string to change and assign it to itself

 upper-string str // change the copy

 // Now "str" references "ORIGINAL STRING" 
 // and "orig" references "original string"

Examples
After copy-string below, "my_str" will be a copy of string "some value":
 set-string other_string="some value"
 copy-string other_string to my_str

Copy a certain number of bytes; the result in "my_str" will be "ome":
 set-string other_string="some value"
 copy-string other_string to my_str length 3 start-with 1

See also
Strings
copy-string  
count-substring  
delete-string  
lower-string  
read-split  
replace-string  
set-string  
split-string  
string-length  
trim-string  
upper-string  
write-string  
See all
documentation
 Count substring

Purpose: Count substrings.

 count-substring <substring> in <string> to <count> [ case-insensitive [ <case insensitive> ] ]

count-substring counts the number of occurrences of <substring> in <string> and stores the result in <count> (specified in "to" clause). By default, search is case-sensitive. If you use "case-insensitive" clause without boolean variable <case insensitive>, or if <case insensitive> evaluates to true, then the search is case-insensitive.

If <substring> is empty (""), <count> is 0.
Examples
In the following example, 1 occurrence will be found after the first count-substring, and 2 after the second (since a case-insensitive search is used there):
 set-string sub = "world"
 set-string str = "Hello world and hello World!"

 count-substring sub in str to num_occ
 pf-out "Found %ld occurrences!\n", num_occ

 count-substring sub in str to num_occ case-insensitive
 pf-out "Found %ld occurrences!\n", num_occ

See also
Strings
copy-string  
count-substring  
delete-string  
lower-string  
read-split  
replace-string  
set-string  
split-string  
string-length  
trim-string  
upper-string  
write-string  
See all
documentation
 Current row

Purpose: Get or print out the row number of a current row in the result-set of a query.

 current-row [ to <current row> ]

Without "to" clause, current-row will print out the current row number. First row is numbered 1. With "to" clause, the row number is stored into variable <current row>. current-row must be within a run-query loop, and it always refers to the most inner one.
Examples
Display row number before a line with first and last name for each employee:
 run-query @mydb="select firstName, lastName from employee" output firstName, lastName
     @Row #<<current-row>><br/>
     p-out firstName
     @,
     p-out lastName
     @<br/>
 end-query
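
With "to" clause, the row number can be stored in a variable instead of being printed; a sketch using the same query:
 run-query @mydb="select firstName, lastName from employee" output firstName, lastName
     current-row to rownum
     @Employee #<<p-num rownum>>: <<p-out firstName>> <<p-out lastName>><br/>
 end-query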

See also
Database
begin-transaction  
commit-transaction  
current-row  
database-config-file  
db-error  
mariadb-database  
postgresql-database  
rollback-transaction  
run-query  
sqlite-database  
See all
documentation
 Database config file

A Gliimly application can use any number of databases, each specified by a database configuration file. This file provides the database name, login and connection settings, and preferences.

When making a Gliimly application, you specify a database vendor and the database configuration file for each database your application uses (see gg), for instance:
 gg ... --db="mariadb:db1 postgres:db2 sqlite:db3"  ...

in which case there are three database configuration files (db1, db2 and db3), with db1 being MariaDB, db2 being PostgreSQL and db3 being SQLite database.

You must create a database configuration file for each database your application uses, and place this file with your source code. It will be copied to the locations used by Gliimly to connect to the database(s) (see directories).

Each such file contains connection information and authentication to a database, which Gliimly uses to login. The names of these configuration files are used in queries. There is no limit on how many databases can be used in your application and those from different vendors can be used in the same application.

An example of database configuration file (in this case MariaDB):
[client]
user=mydbuser
password=somepwd
database=mydbname
protocol=TCP
host=127.0.0.1
port=3306

Using in your queries
Database statements that perform queries (such as run-query) must specify the database configuration file used, unless your application uses only a single database. Such configuration is given by "@<database config file>" (for instance in run-query or begin-transaction). For example, in:
 run-query @mydb="select name from employees"
 ...
 end-query

the query is performed on a database specified by the configuration file "mydb", as in (assuming it's a PostgreSQL database):
 gg ... --db="postgres:mydb"  ...

You do not need to manually connect to the database; when your application uses it for the first time, a connection is automatically established, and a lost connection is automatically re-established when needed.

If your application uses only a single database, you can omit its configuration file name, as in:
 run-query ="select name from employees"
 ...
 end-query

Connection settings
The contents of a configuration file depend on the database used (see mariadb-database, postgresql-database and sqlite-database).
Substituting environment variables
You can use environment variables in database configuration files by means of substitution, in the form of "${VAR_NAME}". For example in file "mydb":
[client]
user=${DB_USER}
password=${DB_PWD}
database=${DB_NAME}
protocol=TCP
host=127.0.0.1
port=${DB_PORT}

Here, environment variables DB_USER, DB_PWD, DB_NAME and DB_PORT are used. They must be defined in the shell environment prior to calling gg to make your application (if not defined the value will be empty):
#Define environment variables for a configuration file
export DB_USER="my_user"
export DB_PWD="my_password"
export DB_NAME="my_database"
export DB_PORT="3307"

#Make application using the above database configuration file with the environment variables specified
gg -q --db=mariadb:mydb

which results in file /var/lib/gg/<app name>/app/db/mydb:
[client]
user=my_user
password=my_password
database=my_database
protocol=TCP
host=127.0.0.1
port=3307

Besides making application deployment easier, this also improves security, since information such as the above (including the database password) does not need to be part of the source code or reside in a source control system (such as git).

Your environment variables can have any names, except that they cannot start with an underscore ("_") or be prefixed by "GG_", because such variable names are reserved by Gliimly.

Note that if your configuration data actually contains a dollar sign, you can create an environment variable for it:
export DOLLAR_SIGN='$'

and in the configuration file:
..
database=my${DOLLAR_SIGN}database
..

In this case the database name is "my$database".
See also
Database
begin-transaction  
commit-transaction  
current-row  
database-config-file  
db-error  
mariadb-database  
postgresql-database  
rollback-transaction  
run-query  
sqlite-database  
See all
documentation
 Db error

Purpose: Either exit request or continue processing when there is an error in a database statement.

 db-error [ @<database> ] ( exit | continue )

db-error sets the response to the failure of database statements. You can change this response at run-time with each execution of db-error.

When a database statement (like run-query) fails, Gliimly will either exit request processing if "exit" is used, or continue if "continue" is used. "Exiting" is equivalent to calling report-error with the message containing details about the error. "Continuing" means that your program will continue but you should examine error code (see for instance "error" clause in run-query).

The default action is "exit". You can switch back and forth between "exit" and "continue". Typically, "exit" is preferable, because errors in database statements generally indicate application or setup issues; however, "continue" may be used when the application wants to attempt to recover from errors or perform other actions.

Note that you can override the effect of db-error for a specific database statement by using clauses like "on-error-continue" and "on-error-exit" in run-query.
Database
<database> is specified in "@" clause and is the name of the database-config-file. If omitted, your program must use exactly one database (see --db option in gg).
Examples
The following will not exit when errors happen going forward, but rather continue execution (and you should check every error henceforth):
 db-error @mydb continue
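
Since the setting can be changed at run-time, you can, as a minimal sketch, continue past a section where you check errors yourself and then restore the default behavior:
 db-error @mydb continue
 // ... database statements whose outcome you examine yourself ...
 db-error @mydb exit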

See also
Database
begin-transaction  
commit-transaction  
current-row  
database-config-file  
db-error  
mariadb-database  
postgresql-database  
rollback-transaction  
run-query  
sqlite-database  
Error handling
db-error  
error-code  
error-handling  
report-error  
See all
documentation
 Debugging

Tracing and Backtrace file
To see any errors reported by Gliimly, use the -e option of gg and check the backtrace file. For example, to see the last 3 error messages:
gg -e 3

You can use trace-run statement to create run-time traces (see directories for directory location). To quickly find the location of recently written-to trace files, use -t option of gg, for example for 5 most recently used trace files:
gg -t 5

Use get-req to get the trace file location at run-time from your application.
Output from Gliimly application without web server
Use gg (see -r option) to send a request from command line, and receive reply from your services. This is useful in debugging issues and automating tests.
Issues in starting mgrg
mgrg starts your application, which runs as service processes. If you're having issues with mgrg, check its log. Assuming your application name is "app_name", the log file is:
/var/lib/gg/app_name/mgrglog/log

Web server error log
If you are using a web server as a reverse proxy, check its error log, which stores the error messages emitted. Typically, such files are in the following location:
/var/log/<web server>

(for example /var/log/apache2), but the location may vary - consult your web server's documentation.
Using gdb debugger
In order to use gdb debugger, you must make your application with "--debug" flag (see gg). Do not use "--debug" in any other case, because performance will be considerably affected.
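
For example, to rebuild an application with debugging information (assuming it is otherwise built with "gg -q"):
gg -q --debug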

You can also attach a debugger to a running Gliimly process. If your application name is "app_name", first find the PID (process ID) of its process(es):
ps -ef|grep app_name.srvc

Note that you can control the number of worker processes started, and thus have only a single worker process (or the minimum necessary), which will make attaching to the process that actually processes a request easier (see gg).

Use gdb to load your program:
sudo gdb /var/lib/gg/bld/app_name/app_name.srvc

and then attach to the process (<PID> is the process ID you obtained above):
att <PID>

Once attached, you can break in the request you're debugging:
br <request name>

or in general Gliimly request dispatcher:
br gg_dispatch_request

which would handle any request to your application.

Note that by default, gdb will show Gliimly code and you can step through it as you've written it, which is easy to follow and understand.

However, if you wish to step through the underlying C libraries, use "--c-lines" option in gg when making your application. In addition, you can use "--plain-diag" option to see diagnostics for underlying C code alone. These options should be used only if you're trying to debug issues with Gliimly itself, or to find and report a bug in Gliimly.

A debugging version of Gliimly, as well as compiling with the "--debug" option, will considerably slow down run-time performance (in some tests by about 50-55%); do not use a Gliimly build with debugging symbols (see install) nor the "--debug" option (see gg) in production.
See also
Debugging
debugging  
trace-run  
See all
documentation
 Decode base64

Purpose: Base64 decode.

 decode-base64 <data> to <output data> \
     [ input-length <input length> ]

decode-base64 will decode string <data> into <output data>, which can be binary string.

If "input-length" clause is used, then <input length> is the number of bytes decoded, otherwise the entirety of <data> is decoded.

The result is stored in <output data> (in "to" clause).

Note that the string to decode can have whitespaces before it (such as spaces or tabs), and whitespaces and new lines after it, which will all be ignored for the purpose of decoding.
Examples
See encode-base64.
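
A minimal sketch is shown below; the decoded value is given in the comment:
 set-string enc = "SGVsbG8gV29ybGQh"
 decode-base64 enc to dec
 // "dec" now holds "Hello World!"
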
See also
Base64
decode-base64  
encode-base64  
See all
documentation
 Decode hex

Purpose: Decode hexadecimal string into data.

 decode-hex <data> to <output> \
     [ input-length <input length> ]

decode-hex will decode hexadecimal string <data> to string <output> given in "to" clause.

<data> must consist of an even number of digits 0-9 and letters A-F or a-f. The length of <data> may be given by <input length> number in "input-length" clause, otherwise it is assumed to be the string length of <data>.
Examples
Get the original binary data from a hexadecimal string "hexdata". The output string "binout" is created:
 set-string hexdata = "0041000F414200"
 decode-hex hexdata to binout

The value of "binout" will be binary data equal to this C literal:
 "\x00""A""\x00""\xF""AB""\x00""\x04"

See also
Hex encoding
decode-hex  
encode-hex  
See all
documentation
 Decode url

Purpose: Decode URL-encoded string.

 decode-url <string> [ input-length <length> ] [ status <status> ]

decode-url will decode <string> (created by encode-url or other URL-encoding software) and store the result back into <string>. If you need <string> unchanged, make a copy of it first with copy-string. <length> in "input-length" clause specifies the number of bytes to decode; if omitted or negative, it is the string length of <string>.

All encoded values (starting with %) are decoded, and "+" (plus sign) is converted to space.

<status> number (in "status" clause) is GG_OKAY if all bytes decoded successfully, or in case of an error it is the index of the byte that could not be decoded (first byte is indexed "0"). If there is an error (for example hexadecimal value following % is invalid), the decoding stops and whatever was decoded up to that point is the result.
Examples
Decode URL-encoded string "str", after which it will hold a decoded string.
 decode-url str
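
A sketch of the decoding rules described above ("%20" becomes a space, and "+" is converted to a space as well):
 set-string str = "a%20b+c"
 decode-url str status st
 // "str" is now "a b c" and "st" is GG_OKAY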

See also
URL encoding
decode-url  
encode-url  
See all
documentation
 Decode web

Purpose: Decode web(HTML)-encoded string.

 decode-web <string> [ input-length <length> ]

decode-web will decode <string> (created by encode-web or other web-encoding software) and store the result back into it. If you need <string> unchanged, make a copy of it first with copy-string. To decode only a number of leading bytes in <string>, use "input-length" clause and specify <length>.

See encode-web.
Examples
Decode web-encoded string "str", after which it will hold a decoded string.
 decode-web str
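
A sketch with a concrete value (the encoded entities follow the mapping shown in encode-web):
 set-string str = "a&lt;b&amp;c"
 decode-web str
 // "str" is now "a<b&c"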

See also
Web encoding
decode-web  
encode-web  
See all
documentation
 Decrypt data

Purpose: Decrypt data.

 decrypt-data <data> to <result> \
     [ input-length <input length> ] \
     [ binary [ <binary> ] ] \
     ( password <password> \
         [ salt <salt> [ salt-length <salt length> ] ] \
         [ iterations <iterations> ] \
         [ cipher <cipher algorithm> ] \
         [ digest <digest algorithm> ]
         [ cache ]
         [ clear-cache <clear cache> ) \
     [ init-vector <init vector> ]

decrypt-data will decrypt <data> which must have been encrypted with encrypt-data, or other software using the same algorithms and clauses as specified.

If "input-length" clause is not used, then the number of bytes decrypted is the length of <data> (see string-length); if "input-length" is specified, then exactly <input length> bytes are decrypted. Password used for decryption is string <password> (in "password" clause) and it must match the password used in encrypt-data. If "salt" clause is used, then string <salt> must match the salt used in encryption. If "init-vector" clause is used, then string <init vector> must match the IV (initialization vector) used in encryption. If "iterations" clause is used, then <iterations> must match the number used in encryption.

The result of decryption is in <result> (in "to" clause).

If data was encrypted in binary mode (see encrypt-data), you must decrypt it with the same, and if it wasn't, then you must not use it in decrypt-data either. The reason for this is obvious - binary mode of encryption is encrypted data in its shortest form, and character mode (without "binary" or if <binary> evaluates to false) is the same data converted to a hexadecimal string - thus decryption must first convert such data back to binary before decrypting.

The cipher and digest algorithms (if specified as <cipher algorithm> and <digest algorithm> in "cipher" and "digest" clauses respectively) must match what was used in encrypt-data.

"cache" clause is used to cache the result of key computation, so it is not computed each time decryption takes place, while "clear-cache" allows key to be re-computed every time <clear cache> evaluates to boolean true; re-computation of a key, if used, must match the usage during encryption. For more on "cache" and "clear-cache" clauses, as well as safety of encrypting/decrypting, see "Caching key" and "Safety" in encrypt-data.
Examples
See encrypt-data.
See also
Encryption
decrypt-data  
derive-key  
encrypt-data  
hash-string  
hmac-string  
random-crypto  
random-string  
See all
documentation
 Delete cookie

Purpose: Deletes a cookie.

 delete-cookie <cookie name> [ path <cookie path> ] [ status <status> ] [ secure <secure> ]

delete-cookie marks a cookie named <cookie name> for deletion, so it is sent back in the reply telling the client (such as browser) to delete it.

Newer client implementations require a cookie deletion to use a secure context if the cookie is considered secure, and it is recommended to use "secure" clause to delete such a cookie. This is the case when either "secure" clause is used without boolean variable <secure>, or if <secure> evaluates to true.

<cookie name> is a cookie that was either received from the client as a part of the request, or was added with set-cookie.

A cookie can be deleted before or after header is sent out (see out-header). However a cookie must be deleted prior to outputting any actual response (such as with output-statement or p-out for example), or the request will error out (see error-handling).

<status> (in "status" clause) is the number that will be GG_ERR_EXIST if the cookie did not exist, or 0 or greater if it did.

The same cookie name may be stored under different URL paths. You can use "path" clause to specify <cookie path> to ensure the desired cookie is deleted.
Examples
 delete-cookie "my_cookie"
 set-bool is_secure = true
 delete-cookie "my_cookie" path "/path" secure is_secure

See also
Cookies
delete-cookie  
get-cookie  
set-cookie  
See all
documentation
 Delete fifo

Purpose: Delete FIFO list elements up to the last one read, including.

 delete-fifo <list>

delete-fifo will delete all leading elements from the FIFO <list> up to the last one read, including. <list> was created by new-fifo.

Right after rewind-fifo, no element was read yet, and delete-fifo will have no effect. After any read-fifo, delete-fifo will delete all elements up to the element read, including that element.
Examples
 new-fifo mylist

 // Add data to the list
 write-fifo mylist key "key1" value "value1"
 write-fifo mylist key "some2" value "other2"

 // Get first data from the list
 read-fifo mylist key k value v

 // Delete first element from the list, so list will have only "some2" key
 delete-fifo mylist

See also
FIFO
delete-fifo  
new-fifo  
purge-fifo  
read-fifo  
rewind-fifo  
write-fifo  
See all
documentation
 Delete file

Purpose: Deletes a file.

 delete-file <file location> [ status <status var> ]

File specified with <file location> is deleted. <file location> can be given with an absolute path or relative to the application home directory (see directories).

If "status" is specified, the status is stored into <status var>. The status is GG_OKAY on success or if the file did not exist, and GG_ERR_DELETE on failure.
Examples
 delete-file "/home/user/some_file" status st

See also
Files
close-file  
copy-file  
delete-file  
file-position  
file-storage  
file-uploading  
lock-file  
open-file  
read-file  
read-line  
rename-file  
stat-file  
temporary-file  
uniq-file  
unlock-file  
write-file  
See all
documentation
 Delete index

Purpose: Delete a node from an index.

 delete-index <index> key <key> \
     [ status <status> ] \
     [ value <value> ]

delete-index will search <index> for string <key> and if found, delete its node (including the key in it), set <value> (in "value" clause) to node's value, and set <status> number (in "status" clause) to GG_OKAY. If <key> is not found, <status> will be GG_ERR_EXIST. If <status> is not GG_OKAY, <value> is unchanged.
Examples
Delete node with key "123", and obtain its value:
 set-string k = "123"
 delete-index myindex key k value val status st
 if-true st not-equal GG_OKAY
    @Could not find key <<p-out k>>
    exit-handler
 end-if
 // display key/value deleted
 @Deleted key <<p-out k>> with value <<p-out val>>
 // delete the original value
 delete-string val

See also
Index
delete-index  
get-index  
new-index  
purge-index  
read-index  
use-cursor  
write-index  
See all
documentation
 Delete lifo

Purpose: Delete LIFO list elements.

 delete-lifo <list>

delete-lifo will delete the most recently added elements to the LIFO <list> up to the last one read, including. <list> was created by new-lifo.

Right after rewind-lifo, no element was read yet, and delete-lifo will have no effect. Note that write-lifo also performs an implicit rewind-lifo.

After any read-lifo, delete-lifo will delete all elements up to the element read, including that element.
Examples
 new-lifo mylist

 // Add data to the list
 write-lifo mylist key "key1" value "value1"
 write-lifo mylist key "some2" value "other2"

 // Get first data from the list, it will be "some2" key
 read-lifo mylist key k value v

 // Delete first element from the list, so list will have only "key1" key
 delete-lifo mylist

See also
LIFO
delete-lifo  
new-lifo  
purge-lifo  
read-lifo  
rewind-lifo  
write-lifo  
See all
documentation
 Delete list

Purpose: Delete current linked list element.

 delete-list <list> [ status <status> ]

delete-list will delete current element in linked <list> created with new-list. A current list element is the one that will be subject to statements like read-list and write-list. See position-list for more details on the current element.

<status> (in "status" clause) is GG_OKAY if the element is deleted, or GG_ERR_EXIST if the element could not be deleted (this can happen if the list is empty, or if the current list element is beyond the last element, for instance if "end" clause is used in position-list statement).

Once an element is deleted, the current element becomes either the next one (if the current element wasn't the last one), or the previous one (if the current element was the last one).
Examples
 delete-list mylist status st

See also
Linked list
delete-list  
get-list  
new-list  
position-list  
purge-list  
read-list  
write-list  
See all
documentation
 Delete string

Purpose: Free string memory.

 delete-string <string>

delete-string frees <string> variable previously allocated by a Gliimly statement.

Note that freeing memory is in most cases unnecessary as Gliimly will automatically do so at the end of each request. You should have a good reason for using delete-string otherwise.

Gliimly keeps count of <string> references. So if <string> is referenced by other Gliimly statements (for instance it was assigned to another string variable in set-string, or was used in statements like write-index or write-array), then <string> may not be deleted; in such a case, unless the string was declared to be of process scope and is still used in such statements, it will be deleted when the request ends. Otherwise, <string> becomes an empty string ("") after it is deleted.
Examples
Allocate and free random string:
 random-string to ran_str
 ...
 delete-string ran_str

Free string allocated by write-string (consisting of 100 "Hello World"s):
 write-string ws
     start-loop repeat 100
         @Hello World
     end-loop
 end-write-string
 ...
 delete-string ws

See also
Strings
copy-string  
count-substring  
delete-string  
lower-string  
read-split  
replace-string  
set-string  
split-string  
string-length  
trim-string  
upper-string  
write-string  
See all
documentation
 Derive key

Purpose: Derive a key.

 derive-key <key> from <source> length <length> \
     [ binary [ <binary> ] ] \
     [ from-length <source length> ] \
     [ digest <digest algorithm> ] \
     [ salt <salt> [ salt-length <salt length> ] ] \
     [ iterations <iterations> ]

derive-key derives <key> from string <source> in "from" clause. If <source length> in "from-length" clause is specified, exactly <source length> bytes of <source> are used. Otherwise, the length of <source> string is used as the number of bytes (see string-length).

The desired length of derived key is given by <length> in "length" clause. The method for key generation is PBKDF2. By default the digest used is "SHA256". You can use a different <digest algorithm> in "digest" clause (for example "SHA3-256"). To see a list of available digests:
#get digests
openssl list -digest-algorithms

The salt for key derivation can be given with <salt> in "salt" clause. If "salt-length" clause is not specified, then the entire length of salt is used (see string-length), otherwise <salt length> bytes are used as salt.

The number of iterations is given by <iterations> in "iterations" clause. The default is 1000 per RFC 8018, though depending on your needs and the quality of <source> you may choose a different value.

By default, the derived key is produced in hexadecimal form, where each byte is encoded as two hexadecimal characters, so its length is 2*<length>. If "binary" clause is used without boolean variable <binary>, or if <binary> evaluates to true, then the output is a binary string of <length> bytes.

Key derivation is often used when storing password-derivatives in the database (with salt), and also for symmetrical key generation.
Examples
Derived key is in variable "mk":
 random-string to rs9 length 16
 derive-key mk from "clave secreta" digest "sha-256" salt rs9 salt-length 10 iterations 2000 length 16

See also
Encryption
decrypt-data  
derive-key  
encrypt-data  
hash-string  
hmac-string  
random-crypto  
random-string  
See all
documentation
 Directories

Application directory structure
mgrg will create a Gliimly directory structure (see "-i" option) when you create your application. While you can keep and compile Gliimly source files in any directory, the directories used by Gliimly are always under /var/lib/gg directory.

A Gliimly application is always owned by a single Operating System user (see "-u" option in mgrg), while different applications can be owned by different users. This is the directory structure:
While Gliimly directories are fixed, you can effectively change their location by creating a soft link. This way, your directories and files can be elsewhere, even on a different disk. For example, to house your file storage on a different disk:
ln -s /home/disk0/file /var/lib/gg/<app name>/app/file

See also
General
about-gliim  
directories  
SELinux  
See all
documentation
 Documentation

Reference for Gliimly version 101
Note: All the topics below are available as a single-page documentation.

Man pages
Gliimly documentation is available online, and also as man pages (i.e. manual pages). You can view any documentation topic with man, for example:
man run-query

man how-gliim-works

The Gliimly section is '2gg', so in case of other software having conflicting topic names, you can also type
man 2gg run-query

man 2gg how-gliim-works


 Do once

Purpose: Execute statements only once in a process.

  do-once
     <any statements>
     ...
  end-do-once

do-once will execute <any statements> only once in a single process regardless of how many requests that process serves. <any statements> end with end-do-once. The first time a process reaches do-once, <any statements> will execute; in all subsequent cases the program control will skip to immediately after end-do-once.

do-once cannot be nested, but otherwise can be used any number of times.

Typical use of do-once may be making any calls that need to be performed only once per process, or it may be a one-time setup of process-scoped variables, or anything else that needs to execute just once for all requests served by the same process.

<any statements> execute in the nested scope relative to the code surrounding do-once/end-do-once, except that any process-scoped variables are created in the same scope as the code surrounding do-once/end-do-once; this simplifies creation of process-scoped variables, if needed.
Examples
In this example, a process-scoped array (that is available to multiple requests of a single process) is created in the very first request a process serves and data is written to it; the subsequent requests do not create a new array but rather just write to it.
 ...
  do-once
     new-array my_array hash-size 1024 process-scope
  end-do-once
  write-array my_array key my_key value my_data
  ...

See also
Program flow
break-loop  
code-blocks  
continue-loop  
do-once  
exit-handler  
if-defined  
if-true  
set-bool  
start-loop  
See all
documentation
 Encode base64

Purpose: Base64 encode.

 encode-base64 <data> to <output data> \
     [ input-length <input length> ]

encode-base64 will encode string <data> into base64 string <output data>.

If "input-length" clause is used, then <input length> is the number of bytes encoded, otherwise the entirety of <data> is encoded.

The result is stored in <output data> (in "to" clause).

Base64-encoded strings are often used where binary data needs to be in a format that complies with certain text-based protocols, such as in attaching documents in email, or embedding binary documents (such as "JPG" images for example) in web pages with "data:image/jpg;base64..." specified, etc.
Examples
An example that encodes a string, decodes, and finally checks if they match:
 // Original string, generally this would be binary data in most cases
 set-string dt ="  oh well  "

 // Encode in base64
 encode-base64 dt to out_dt

 decode-base64 out_dt to new_dt

 if-true dt equal new_dt
     @Success!
 else-if
     @Failure!
 end-if

In the next example, "input-length" clause is used, and only a part of the input string is encoded, then later compared to the original:
 // Original string, generally this would be binary data in most cases
 set-string dt ="  oh well  "

 // Encode in base64, encode only 6 bytes
 encode-base64 dt input-length 6 to out_dt

 decode-base64 out_dt to new_dt

 // Get length of decoded string
 string-length new_dt to new_len

 if-true new_len not-equal 6
     @Failure!
 else-if
     @Success!
 end-if

 if-true dt equal new_dt length new_len
     @Success!
 else-if
     @Failure! [<<p-out dt>>] [<<p-out new_dt>>]
 end-if

See also
Base64
decode-base64  
encode-base64  
See all
documentation
 Encode hex

Purpose: Encode data into hexadecimal string.

 encode-hex <data> to <output> \
     [ input-length <input length> ] \
     [ prefix <prefix> ]

encode-hex will encode string <data> into hexadecimal string <output> (given in "to" clause), which consists of digits "0"-"9" and letters "a"-"f".

The length of <data> to encode may be given with <input length> number in "input-length" clause; if not, the whole string <data> is used. If you wish to prefix the output with a string, specify it as <prefix> in "prefix" clause; otherwise no prefix is prepended.
Examples
Create hexadecimal string from binary data "mydata" of length 7, prefixed with string "\\\\x" (which is typically needed for PostgreSQL binary input to queries). The output string "hexout" is created:
 set-string mydata = "\x00""A""\x00""\xF""AB""\x00""\x04"
 encode-hex mydata to hexout input-length 7 prefix "\\\\x"

The value of "hexout" will be:
 \\x0041000F414200

See also
Hex encoding
decode-hex  
encode-hex  
See all
documentation
 Encode url

Purpose: URL-encode string.

 encode-url <string> to <encoded string> \
     [ input-length <length> ]

encode-url URL-encodes <string> and stores the result in <encoded string>.

<length> in "input-length" clause lets you specify the number of bytes in <string> that will be encoded - if not specified or negative, it is the string length.

All bytes except alphanumeric and those from "-._~" (i.e. dash, dot, underscore and tilde) are encoded.
Examples
In this example, a string "str" is URL encoded and the result is in a "result" string variable:
 set-string str="  x=y?z&  "
 encode-url str to result

The "result" is "%20%20x%3Dy%3Fz%26%20%20".
See also
URL encoding
decode-url  
encode-url  
See all
documentation
 Encode web

Purpose: Web(HTML)-encode string.

 encode-web <string> to <encoded string> \
     [ input-length <length> ]

encode-web encodes <string> so it can be used in HTML-like markup text (such as a web page or an XML/XHTML document), and stores the result in <encoded string>.

You can encode only the first <length> bytes, given by "input-length" clause.
Examples
In this example, a string "str" will be web-encoded and the result is in "result" variable:
 set-string str="  x<y>z&\"'  "
 encode-web str to result

The "result" is "   x&lt;y&gt;z&amp;&quot;&apos;  ".
See also
Web encoding
decode-web  
encode-web  
See all
documentation
 Encrypt data

Purpose: Encrypt data.

 encrypt-data <data> to <result> \
     [ input-length <input length> ] \
     [ binary [ <binary> ] ] \
     ( password <password> \
         [ salt <salt> [ salt-length <salt length> ] ] \
         [ iterations <iterations> ] \
         [ cipher <cipher algorithm> ] \
         [ digest <digest algorithm> ]
         [ cache ]
         [ clear-cache <clear cache> ) \
     [ init-vector <init vector> ]

encrypt-data encrypts <data> and stores the ciphertext to <result> specified by "to" clause.
Cipher and digest
By default, AES-256-CBC encryption and SHA256 hashing are used. You can however specify different cipher and digest algorithms with <cipher algorithm> (in "cipher" clause) and <digest algorithm> (in "digest" clause), as long as OpenSSL supports them or you have added them to OpenSSL. You can see the available ones by using:
#get list of cipher providers
openssl list -cipher-algorithms

#get list of digest providers
openssl list -digest-algorithms

Note that the default algorithms will typically suffice. If you use different algorithms, you should have a specific reason. If you use a specific cipher and digest for encoding, you must use the same for decoding. The key derivation method is PBKDF2.
Data to be encrypted
If "input-length" clause is missing, then the number of bytes encrypted is the length of <data> (see string-length). If "input-length" clause is used, then <input length> bytes are encrypted.
Password
String <password> (in "password" clause) is the password used to encrypt and it must be a null-terminated string.
Salt
String <salt> (in "salt" clause) is the salt used in Key Derivation Function (KDF) when an actual symmetric encryption key is created. If <salt length> (in "salt-length" clause) is not specified, then the salt is null-terminated, otherwise it is a binary value of length <salt length>. See random-string or random-crypto for generating a random salt. If you use the "salt" clause, then you must use the exact same <salt> when data is decrypted with decrypt-data - typically salt values are stored or transmitted unencrypted.
Iterations
The number of iterations used in producing a key is specified in <iterations> in "iterations" clause. The default is 1000 per RFC 8018, though depending on your needs and the quality of password you may choose a different value.
Initialization vector (IV)
Different encrypted messages should have a different IV value, which is specified with <init vector> in the "init-vector" clause. See random-string or random-crypto for generating IV values. The decrypting side must use the same IV value to decrypt the message. Just like salt, IV is not a secret and is transmitted in plain text. Each cipher algorithm may require a certain number of bytes for IV.
Encrypted data
The encrypted data is stored in <result> (in "to" clause). The encrypted data can be a binary data (if "binary" clause is present without boolean variable <binary>, or if <binary> evaluates to true), which is binary-mode encryption; or if not, it will be a null-terminated string, which is character-mode encryption, consisting of hexadecimal characters (i.e. ranging from "0" to "9" and "a" to "f"). Character mode of encryption is convenient if the result of encryption should be a human readable string, or for the purposes of non-binary storage in the database.
Caching key
A key used to actually encrypt/decrypt data is produced by using password, salt, cipher, digest and the number of iterations. Depending on these parameters (especially the number of iterations), computing the key can be a resource intensive and lengthy operation. You can cache the key value and compute it only once (or once in a while) by using "cache" clause. If you need to recompute the key once in a while, use "clear-cache" clause. <clear cache> is a "bool" variable; the key cache is cleared if it is true, and stays if it is false. For example with encrypt-data (the same applies to decrypt-data):
 set-bool clear = true if-true q equal 0
 encrypt-data dt init-vector non password pwd \
     salt rs salt-length 10 iterations iter to \
     dt_enc cache clear-cache clear

In this case, when "q" is 0, cache will be cleared, with values of password, salt and iterations presumably changed, and the new key is computed and then cached. In all other cases, the last computed key stays the same. Normally, with IV usage (in "init-vector" clause), there is no need to change the key often, or at all.

Note that while "cache" clause is in effect, the values for "password", "salt", "cipher", "digest" and "iterations" clauses can change without any effect. Only when "clear-cache" evaluates to "true" are those values taken into account.
Safety
Unless you are encrypting/decrypting a single message, you should always use IV in "init-vector" clause. Its purpose is to randomize the data encrypted, so that same messages do not produce the same ciphertext.

If you use salt, a random IV is created with each different salt value. However, different salt values without "cache" clause will regenerate the key, which may be computationally intensive, so it may be better to use a different IV for each new encryption and keep the salt value the same with a high number of iterations. In practice, this means using "cache" so that the key is computed once per process with the salt, and the IV changes with each message. If you need to recompute the key occasionally, use "clear-cache".

Each cipher/digest combination carries separate recommendations about the usage of salt, IV and the number of iterations. Please consult their documentation for more details.
Examples
In the following example, the data is encrypted, and then decrypted, producing the very same data:
 // Original string to encrypt
 set-string orig_data="something to encrypt!"

 // Encrypted data is in "res" variable
 encrypt-data orig_data password "mypass" to res

 // Decrypt what was just encrypted, decrypted data is in "dec_data"
 decrypt-data res password "mypass" to dec_data

 // Check that decrypted data matches the original 
 if (!strcmp (orig_data, dec_data)) {
     @Success!
 } else {
     @Failure!
 }

A more involved example below encrypts a specific number of bytes (6 in this case). random-string is used to produce the salt. The length of data to encrypt is given with "input-length" clause. The encrypted data is specified to be "binary" (meaning not a human-readable string), so the "output-length" of such binary output is obtained. The decryption thus uses "input-length" clause to specify the length of data to decrypt, and also "output-length" to get the length of decrypted data. Finally, the original data is compared with the decrypted data, and the length of such data must be the same as the original (meaning 6):
 // Original data (only the first 6 bytes are encrypted)
 set-string orig_data="something to encrypt!"

 // Get 8 random binary bytes to be the salt
 random-string to newsalt length 8 binary

 // Encrypt data using salt and produce binary output (meaning it's not a null-terminated character string), with the
 // length of such output in "encrypted_len" variable.
 encrypt-data orig_data input-length 6 output-length encrypted_len password "mypass" salt newsalt to res binary

 // Decrypt the data encrypted above. The length of encrypted data is passed in "encrypted_len" variable, and then length of decrypted data
 // is obtained in "decrypted_len" variable.
 decrypt-data res output-length decrypted_len password "mypass" salt newsalt to dec_data input-length encrypted_len binary

 // Check if the 6 bytes of the original data matches decrypted data, and if exactly 6 bytes was decrypted
 if (!strncmp(orig_data,dec_data, 6) && decrypted_len == 6) {
     @Success!
 } else {
     @Failure!
 }

An example of using different algorithms:
 encrypt-data "some data!" password "mypwd" salt rs1 to encd1 cipher "camellia-256-cfb1" digest "sha3-256"
 decrypt-data encd1 password "mypwd" salt rs1 to decd1 cipher "camellia-256-cfb1" digest "sha3-256"

See also
Encryption
decrypt-data  
derive-key  
encrypt-data  
hash-string  
hmac-string  
random-crypto  
random-string  
See all
documentation
 Error code

Many Gliimly statements return status with GG_ERR_... error codes, which are generally descriptive to a point. Such status may not be as detailed as the operating system "errno" variable; however, you can use the "errno" clause in get-req statement to obtain the last known errno value from the aforementioned statements. You should obtain this value as soon as possible after the statement, because another statement may set it afterwards.

In the following example, a directory is attempted to be deleted via delete-file, which will fail with GG_ERR_DELETE - however you can get a more specific code via "errno" (which in this case is "21", or "EISDIR", which means that it cannot delete a directory with this statement):
 delete-file "some_directory" status stc
 if-true stc equal GG_ERR_DELETE
     get-req errno to e
     @Cannot delete file
     pf-out "Error %ld\n", e
 end-if

Note that with some GG_ERR_... codes, the "errno" clause in get-req may return 0. This means the error was detected by Gliimly and not reported by the operating system.
See also
Error handling
db-error  
error-code  
error-handling  
report-error  
See all
documentation
 Error handling

When your program errors out
"Erroring out" means a process handling a request has encountered a difficulty that cannot be handled and it will either:
.
Note that if your program is command-line, it will exit in any case since it handles a single request anyway.
When there is a problem in Gliimly
If there is a fatal internal error (i.e. error in Gliimly code itself that cannot be handled), it will be caught by Gliimly, and the process will end. If your process is started with mgrg, it may be automatically restarted.
Logging the error
Regardless of the type of error and regardless of whether the process exits or not, the error is logged, and the program stack with full source code lines (see gg for including debug information) will be written to the backtrace file (use the -e option of gg to obtain its location). Note that the program stack is logged only if Gliimly is built in debugging mode (see the "DI=1" option when building Gliimly); otherwise, production code would be slowed down by stack dumping.

You can see the list of the last N errors (and the location of the file containing the backtrace for them) by using gg, for instance to see the last 3 errors:
gg -e 3

See also
Error handling
db-error  
error-code  
error-handling  
report-error  
See all
documentation
 Exec program

Purpose: Execute a program.

 exec-program <program path> \
     [ args <program arg> [ , ... ] ] \
     [ status <exit status> ] \
     [ ( input <input string> [ input-length <string length> ] ) \
         | ( input-file <input file> ) ] \
     [ ( output <output string>  ) \
         | ( output-file <output file> ) ] \
     [ ( error <error string> ) | ( error-file <error file> ) ]

exec-program executes a program specified in <program path>, which can be a program name without path that exists in the path specified by the PATH environment variable; or an absolute path; or a path relative to the application home directory (see directories).

A program can have input arguments (specified as strings with "args" clause), and if there are more than one, they must be separated by a comma. There is no limit on the number of input arguments, other than of the underlying Operating System.

You can specify a status variable <exit status> - this variable will have the program's exit status. Note that if the program was terminated by a signal, <exit status> will have a value of 128+signal_number, so for example if the program was terminated with signal 9 (i.e. KILL signal), <exit status> will be 137 (i.e. 128+9). Any other kind of abnormal program termination (i.e. where the program did not set the exit code) will set <exit status> to 126.

Specifying program input and output is optional. If program has output and you are not capturing it in any way, the output is redirected to a temporary file that is deleted after exec-program completes.

You can specify an <input string> to be passed to program's standard input (stdin) via "input" clause. If "input-length" is not used, the length of this input is the string length of <input string>, otherwise <string length> bytes is passed to the program. Alternatively, you can specify a file <input file> (via "input-file" clause) to be opened and directed into program's standard input (stdin).

You can redirect the program's output (which is "stdout") to a file <output file> using "output-file" clause. Alternatively, program's output can be captured in <output string> (via "output" clause).

To get the program's error output (which is "stderr") to file <error file>, use "error-file" clause. Alternatively, program's error output can be captured in <error string> (via "error" clause).

If <input file> cannot be opened, GG_ERR_READ is reported in <exit status>, and if either <output file> or <error file> cannot be opened, the status is GG_ERR_WRITE.
Examples
To simply execute a program that is in the path, without any arguments, input or output:
 exec-program "myprogram"

Run "grep" program using a string as its standard input in order to remove a line that contains "bad line" in it, and outputting the result into "ovar" variable:
 exec-program "grep" args "-v", "bad line" "config" input "line 1\nline 2\nbad line\nline 3" output ovar
 p-out ovar

Get the list of files in the application home directory into buffer "ovar" and then display it:
 exec-program "ls" output ovar
 p-out ovar

Similar to the above example of listing files, but output results to a file (which is then read and the result displayed), and provide options "-a -l -s" to "ls" program:
 exec-program "ls" args "-a", "-l", "-s" output-file "lsout"
 read-file "lsout" to final_res
 p-out final_res

Count the lines of file "config" (which is redirected to the standard input of the "wc" program), store the result in variable "ovar" (by capturing the output of "wc"), and then display it:
 exec-program "wc" args "-l" input-file "config" output ovar
 p-out ovar
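
A sketch of checking the exit status described above; per the rules above, 137 means termination by signal 9 (the program name is illustrative):
 exec-program "myprogram" status st
 if-true st equal 137
     @Program was terminated by signal 9 (KILL)
 end-if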

See also
Program execution
exec-program  
handler-status  
See all
documentation
 Exit handler

Purpose: Exit current request processing.

 exit-handler [ <request status> ]

Exits the current request by transferring control to the point directly after the top-level request dispatcher. If there is an after-handler, it will still execute, unless exit-handler is called from before-handler.

<request status> number is a request status returned to the caller (see handler-status); if not specified, then it's the value specified in the last executed handler-status statement; if none executed, then it's 0.
Examples
Returning status of 20:
 begin-handler /req-handler public
     ...
     handler-status 20
     ...
     exit-handler
     ...
 end-handler

Returning status of 0:
 begin-handler /req-handler public
     ...
     exit-handler
     ...
 end-handler

Returning status of 10:
 begin-handler /req-handler public
     ...
     exit-handler 10
     ...
 end-handler

See also
Program flow
break-loop  
code-blocks  
continue-loop  
do-once  
exit-handler  
if-defined  
if-true  
set-bool  
start-loop  
See all
documentation
 Extended mode

Purpose: Use external libraries or C code with Gliimly.

 extended-mode

extended-mode, when specified as a very first statement in .gliim source file, allows for use of call-extended statement.
Examples
 extended-mode

 begin-handler /my-handler public
     ...
     call-extended factorial (10, &fact)
     ...
 end-handler

See also
Safety
call-extended  
extended-mode  
See all
documentation
 File position

Purpose: Set a position or get a current position for an open file.

 file-position file-id <file id> \
     ( set <position> ) | ( get <position> ) \
     [ status <status> ]

file-position will set or get position for a file opened with open-file, where <file id> is an open file identifier.

If "set" clause is used, file position is set to <position> (with 0 being the first byte).

If "get" clause is used, file position is obtained in <position> (with 0 being the first byte).

<status> number in "status" clause will be GG_OKAY if set/get succeeded, GG_ERR_OPEN if the file is not open, or GG_ERR_POSITION if the position could not be set or obtained.

Note that setting position past the last byte of file is okay for writing - in this case the bytes between the end of file and the <position> are filled with null-bytes when the write operation occurs.
Examples
Open file "testwrite" and set file byte position to 100, then obtain it later:
 open-file "testwrite" file-id nf
 file-position file-id nf set 100
 ...
 file-position file-id nf get pos

See also open-file.
See also
Files
close-file  
copy-file  
delete-file  
file-position  
file-storage  
file-uploading  
lock-file  
open-file  
read-file  
read-line  
rename-file  
stat-file  
temporary-file  
uniq-file  
unlock-file  
write-file  
See all
documentation
 File storage

Gliimly provides a file directory that you can use for any general purpose, including for storing temporary-files. This directory is also used for automatic upload of files from clients. It provides a two-tier directory system, with sub-directories automatically created to spread the files for faster access.

Files in Gliimly file directory are located in the sub-directories of:
/var/lib/gg/<app_name>/app/file

If you wish to place this directory in another physical location, see gg.

You can create files here by means of uniq-file, where a new unique file is created - with the file name made available to you. This directory is also used for uploading of files from clients (such as web browsers or mobile device cameras) - the uploading is handled automatically by Gliimly (see get-param).

In general, a number of sub-directories is created within the file directory to contain the files, with (currently) a maximum of 40,000 directories and an unlimited number of files per directory (so if you had 50,000 files per directory, you could store a total of about 2 billion files). This scheme also allows for faster access to file nodes due to the relatively low number of sub-directories, with files randomly spread across them. The random spreading of files across sub-directories is done automatically.

Do not rename file names or sub-directory names stored under file directory.
See also
Files
close-file  
copy-file  
delete-file  
file-position  
file-storage  
file-uploading  
lock-file  
open-file  
read-file  
read-line  
rename-file  
stat-file  
temporary-file  
uniq-file  
unlock-file  
write-file  
See all
documentation
 File uploading

Purpose: Upload a file to server.

 get-param <name>

Files uploaded via client (such as a browser, curl etc.) are obtained via get-param.

Gliimly handles file uploads from HTML automatically, meaning you do not have to write any code for that specific purpose. An uploaded file will be stored in file-storage, with the path and name of such a file generated by Gliimly to be unique. For example, an uploaded file might be named "/var/lib/gg/app_name/app/file/d0/f31881". When a file is uploaded, the following input parameters can be obtained, in this case assuming the "name" attribute of the HTML "input" element is "myfile":

For example, for an HTML form which is uploading a file named "myfile", such as
<input type='file' name='myfile'>

your code that handles this might be:
 get-param myfile_filename
 get-param myfile_location
 get-param myfile_ext
 get-param myfile_size

 @You have uploaded file <<p-web myfile_filename>> to a server file at <<p-web myfile_location>>

See also
Files
close-file  
copy-file  
delete-file  
file-position  
file-storage  
file-uploading  
lock-file  
open-file  
read-file  
read-line  
rename-file  
stat-file  
temporary-file  
uniq-file  
unlock-file  
write-file  
See all
documentation
 Finish output

Purpose: Finish the output.

 finish-output

finish-output will flush out and conclude all output (see output-statement). Any output attempted afterwards will silently fail. As far as the client is concerned, all the output is complete.

This statement is useful when you need to continue work after the output is complete. For example, if the task performed is a long-running one, you can inform the client that the job has started, and then take any amount of time to actually complete the job, without worrying about client timeouts. The client can inquire about the job status via a different request, or be informed via email etc.
Examples
 finish-output
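
For the long-running job scenario described above, a minimal sketch might look like this (the handler name is illustrative):
 begin-handler /long-job public
     @Job started
     finish-output
     // continue the long-running work here; the client has already received the complete reply
     ...
 end-handler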

See also
Output
finish-output  
flush-output  
output-statement  
pf-out  
pf-url  
pf-web  
p-num  
p-out  
p-path  
p-source-file  
p-source-line  
p-url  
p-web  
See all
documentation
 Flush output

Purpose: Flush output.

 flush-output

Use flush-output statement to flush any pending output.

This can be useful if the complete output would take longer to produce and intermittent partial output would be needed.
Examples
In this case the complete output may take at least 20 seconds. With flush-output, the message "This is partial output" will be flushed out immediately.
 @This is partial output
 flush-output
 sleep(20);
 @This is final output

See also
Output
finish-output  
flush-output  
output-statement  
pf-out  
pf-url  
pf-web  
p-num  
p-out  
p-path  
p-source-file  
p-source-line  
p-url  
p-web  
See all
documentation
 Get app

Purpose: Obtain data that describes the application.

 get-app \
     name | directory | trace-directory | file-directory \
         | db-vendor <database configuration> | upload-size \
         | path | is-service \
     to <variable>

Application-related variables can be obtained with get-app statement. The following application variables can be obtained (they are all strings unless indicated otherwise):
Examples
Get the name of Gliimly application:
 get-app name to appname

Get the vendor of database db:
 get-app db-vendor db to dbv
 if-true dbv equal GG_POSTGRES
     // do something Postgres specific
 end-if

See also
Application information
get-app  
See all
documentation
 Get array

Purpose: Get usage specifics for an array.

 get-array <array> \
     ( length <length> ) \
     | ( hash-size <hash size> ) \
     | ( average-reads <reads> )

get-array provides usage specifics of an <array> (created by new-array).

Use "length" clause to obtain its <length> (i.e. the number of elements stored in it), "hash-size" clause to obtain its <hash size> (i.e. the number of "buckets", or possible array codes in the underlying hash table).

"average-reads" clause will obtain in <reads> the average number of array-reads (i.e. how many string comparisons are needed on average to find a key) multiplied by 100 (so if an average number of reads was 1.5, it will be 150).

This information may be useful in determining the performance of an array, and whether resize-array is indicated.
Examples
 get-array h length l hash-size s average-reads r
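
The obtained values can then be examined, for instance to decide whether resize-array is warranted; a minimal sketch:
 get-array h length l average-reads r
 pf-out "Array has %ld elements; average reads (x100): %ld\n", l, r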

See also
Array
get-array  
new-array  
purge-array  
read-array  
resize-array  
write-array  
See all
documentation
 Get cookie

Purpose: Get cookie value.

 get-cookie ( <cookie value> = <cookie name> ) ,...

get-cookie obtains string <cookie value> of a cookie with the name given by string <cookie name>. A cookie is either obtained via the incoming request from the client (such as a web browser) or set using set-cookie.

The value of cookie is stored in <cookie value>.  

Cookies are often used to persist user data on the client, such as for maintaining session security or for convenience of identifying the user etc.

You can obtain multiple cookies separated by a comma:
 get-cookie c = "mycookie", c1 = "mycookie1", c2="mycookie2"

Examples
Get value of cookie named "my_cookie_name" - variable my_cookie_value will hold its value:
 get-cookie my_cookie_value="my_cookie_name"

See also
Cookies
delete-cookie  
get-cookie  
set-cookie  
See all
documentation
 Get index

Purpose: Get information about an index.

 get-index <index> \
     ( count <count> ) | ( hops <hops> )

get-index provides information about <index> (created by new-index):