<hops> (in "hops" clause) provides the number of nodes accessed to find a key in the last index statement executed prior to get-index. Note that <hops> is available only in debug Gliimly build; otherwise it is always zero.
Get the number of nodes (keys) in an index:
new-index myindex
...
get-index myindex count c
Copied!
Index
delete-index
get-index
new-index
purge-index
read-index
use-cursor
write-index
See all
documentation
Get list
Purpose: Get information about linked list.
get-list <list> count <count>
Copied!
get-list obtains information about the linked list <list>, created with new-list.
<count> (in "count" clause) is the number of elements in the list.
Get the number of elements in list "mylist" into number "size":
get-list mylist count size
Copied!
Linked list
delete-list
get-list
new-list
position-list
purge-list
read-list
write-list
See all
documentation
Get message
Purpose: Get string produced by writing a message.
get-message <message> to <string>
Copied!
get-message will create a <string> from <message> which must have been created with new-message. <string> can then be used elsewhere, for instance sent with a remote call (see run-remote), written to a file etc.
Once get-message is called, <message> is initialized as if it was just created with new-message without the "from" clause.
new-message msg
write-message msg key "key1" value "value1"
get-message msg to str
...
new-message new from str
read-message new key k value v
pf-out "[%s] [%s]\n", k,v
Copied!
The result is:
[key1] [value1]
Copied!
Messages
get-message
new-message
read-message
write-message
See all
documentation
Get param
Purpose: Get a parameter value.
get-param ( <name> [ type <type> ] ) , ...
Copied!
get-param stores a parameter value in variable <name>. A parameter is a name/value pair kept by Gliimly for each request. The parameter's name must match <name>. A parameter can be of any type. A parameter is set either by the caller of the request (as an input parameter) or during the request's execution with set-param (see below).
If the parameter is a string, it is trimmed of whitespace (both leading and trailing). You can specify any number of parameters, separated by commas.
By default, <name> is a string variable, unless <type> (in "type" clause) is specified. <type> can be "string" for a string variable (the default), "bool" for a boolean variable, "number" for a number variable, "message" for a message variable, "split-string" for a split-string variable, "array" for an array variable, "index" for an index variable, "index-cursor" for an index cursor variable, "fifo" for a FIFO variable, "lifo" for a LIFO variable, "list" for a list variable, "file" for a file variable, and "service" for a service variable.
The value obtained with get-param is checked to be of the proper <type>, and if it isn't, your request will error out. The exception is that a string parameter can be converted into a number or a boolean, assuming the string value represents a valid number or is "true"/"false". Parameters of "number" and "bool" types are obtained by value, and others by reference. This means, for instance, that you can pass an index to call-handler, read and write nodes there, and such changes will be visible in the calling request.
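For instance, a minimal sketch (parameter names here are hypothetical) obtaining one string, one number and one boolean parameter in a single statement:
get-param user_name, age type number, active type bool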
Input parameters from a caller
Input parameters from an outside caller are specified as name/value pairs (see service or command-line). An input parameter name can consist only of alphanumeric characters, hyphens and underscores, and cannot start with a digit. Note that a hyphen is automatically converted to an underscore, so for instance an input parameter "some-parameter" in an HTTP request will be "some_parameter" in get-param.
- File uploads
File uploads are handled as input parameters as well, see file-uploading.
- Web input parameters
As an example, for HTML form input parameters named "param1" with value "value1" and "param2" with value "value2":
<input type='hidden' name='param1' value='value1'>
<input type='hidden' name='param2' value='value2'>
Copied!
you can get these parameters and print out their values by using:
get-param param1, param2
Copied!
A request may be in the form of a web link URL, and getting the parameter values is the same:
http://<your web server>/<app name>/<request name>?param1=value1&param2=value2
Copied!
Setting parameters during request's execution
Use set-param to replace the value of an existing parameter, or create a new one. For instance:
get-param par1
...
set-param par1="new value"
Copied!
In this case the value of an existing parameter "par1" is replaced with "new value". In the following code a new parameter is created, which can be retrieved later with get-param:
set-param par1="new value"
get-param par1
Copied!
See call-handler for more examples.
Duplicate input parameter names
If there are multiple input parameters with the same name set by the request caller, such as
http://<web address>/<app name>/<request name>?par=val1&par=val2
Copied!
the value of input parameter "par" is undefined. Do not specify multiple input parameters with the same name.
Request data
get-param
request-body
set-param
See all
documentation
Get req
Purpose: Obtain data that describes the input request.
get-req \
errno | error | cookie-count | cookie <cookie index> \
| arg-count | arg-value <arg index> \
| header <header> | referring-url | method \
| content-type | trace-file | process-id | name \
to <variable>
Copied!
Information related to an input request can be obtained with get-req statement and the result stored into <variable> (in "to" clause). The following can be obtained (all are strings except where stated otherwise):
- "errno" obtains the integer value of operating system "errno" tied to a last Gliimly statement that can return a status code. The "errno" value is saved internally; and restored here. It means that if "errno" changed for whatever reason since such Gliimly statement (such as with call-extended), you will still obtain the correct value. See error-code for an example. Note that errno value is undefined if there is no error, and can be 0 if the error is reported by Gliimly and not by operating system.
- "error" returns the error message that correlates to "errno" value.
- "cookie-count" returns the number of cookies. This means any cookies received from the client plus any cookies added (with set-cookie) in the application minus any cookies deleted (with delete-cookie).
- "cookie" returns the cookie value specified by <cookie index> (a sequential number starting with 1 up to the number of cookies), for instance:
get-req cookie-count to cookie_c
start-loop repeat cookie_c use i
get-req cookie i to cookie_val
pf-web "cookie %s\n", cookie_val
@<br/>
end-loop
Copied!
In this example, we get the number of cookies, and then print out each cookie value.
- "arg-count" is the number of input arguments to your application (passed from a program caller, see "-a" option in mgrg and "--arg" in gg).
- "arg-value" is the string value of a single element from the array of input arguments, specified by <arg_index>. This array is indexed from 1 to the value obtained by "arg-count". Here is an example of using arg-count and arg-value:
get-req arg-count to ac
pf-out "Total args [%ld]", ac
start-loop repeat ac use i
get-req arg-value i to av
pf-out "%s\n", av
end-loop
Copied!
This code will display the number of input arguments (as passed to the main() function of your program, excluding the first argument, which is the name of the program), and then all the arguments. If there are no arguments, then variable "ac" would be 0.
- "header" is the value of HTTP request header <header> that is set by the client. For example, if the HTTP request contains header "My-Header:30", then hval would be "30":
get-req header "My-Header" to hval
Copied!
Note that not all HTTP request headers are set by the caller. For example, SERVER_NAME or QUERY_STRING are set by the web server, and to get such headers, use get-sys.
- "method" is the request method. This is a number with values of GG_GET, GG_POST, GG_PUT, GG_PATCH or GG_DELETE for GET, POST, PUT, PATCH or DELETE requests, respectively. If it is not any of those commonly used ones, then the value is GG_OTHER and you can use get-sys with "environment" clause to obtain "REQUEST_METHOD" variable.
- "content-type" is the request content type. It is a string and generally denotes the content type of a request-body, if included in the request. Common examples are "application/x-www-form-urlencoded", "multipart/form-data" or "application/json".
- "referring-url" is the referring URL (i.e. the page from which this request was called, for example by clicking a link).
- "trace-file" is the full path of the trace file for this request (if enabled, see trace-run).
- "process-id" is the "PID" (process ID) number of the currently executing process, as a number.
- "name" is the request name as specified in the request URL.
Get the name of current trace file:
get-req trace-file to trace_file
Copied!
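As another example, you can branch on the request method (the variable name "req_method" is arbitrary):
get-req method to req_method
if-true req_method equal GG_GET
@This is a GET request
end-if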
Request information
get-req
See all
documentation
Get sys
Purpose: Obtain data that describes the system.
get-sys \
environment <var name> \
directory | os-name | os-version \
to <variable>
Copied!
System-describing variables can be obtained with get-sys statement and the result stored into <variable>. The following system variables can be obtained:
- "environment" returns the name of a given environment variable <var name>. If this is a server program, then the environment passed from a remote caller (such as web proxy) is queried. If this is a command-line program, then the environment from the Operating System is queried. In the following example,the QUERY_STRING variable (i.e. the actual query string from URL) is obtained:
get-sys environment "QUERY_STRING" to qstr
Copied!
- "directory" is the execution directory of the command-line program, i.e. the current working directory when the program was executed. Note that Gliimly will change the current working directory immediately afterwards to the application home directory (see directories). You can use this clause to work with files in the directory where the program was started. If your program runs as a service, then "directory" clause always returns application home directory, regardless of which directory mgrg program manager started your application from.
- "os-name" is the name of Operating System.
- "os-version" is the version of Operating System.
Get the name of the Operating System
get-sys os-name to os_name
Copied!
System information
get-sys
See all
documentation
Get time
Purpose: Get time.
get-time to <time var> \
[ timezone <tz> ] \
[ year <year> ] \
[ month <month> ] \
[ day <day> ] \
[ hour <hour> ] \
[ minute <minute> ] \
[ second <second> ] \
[ format <format> ]
Copied!
get-time produces the <time var> variable, which contains the time as a string. <time var> is allocated memory.
If none of the "year", "month", "day", "hour", "minute" or "second" clauses are used, then the current time is produced.
Use timezone to specify that time produced will be in timezone <tz>. For example if <tz> is "EST", that means Eastern Standard Time, while "MST" means Mountain Standard Time. The exact way to get a full list of timezones recognized on your system may vary, but on many systems you can use:
timedatectl list-timezones
Copied!
So for example to get the time in Phoenix, Arizona you could use "America/Phoenix" for <tz>. If timezone clause is omitted, then time is produced in "GMT" timezone by default. DST (Daylight Savings Time) is automatically adjusted.
Each value specified with "year", "month", "day", "hour", "minute" or "second" is added to or subtracted from the current date. For example "year 2" means add 2 years to the current date, and "year -4" means subtract 4 years, whereas "hour -4" means subtract 4 hours, etc. So for example, a moment in time that is 2 years into the future minus 5 days minus 1 hour is:
get-time to time_var year 2 day -5 hour -1
Copied!
<format> allows you to get the time in any string format you like, using the specifiers available in C "strftime". For example, if <format> is "%A, %B %d %Y, %l:%M %p %Z", it will produce something like "Sunday, November 28 2021, 9:07 PM MST". The default is the "UTC/GMT" format, which, for instance, is suitable for use with cookie timestamps and looks something like "Mon, 16 Jul 2012 00:03:01 GMT".
To get current time in "GMT" timezone, in a format that is suitable for use with set-cookie (for example to set expiration date):
get-time to mytime
Copied!
To get the time in the same format, only 1 year and 2 months in the future:
get-time to mytime year 1 month 2
Copied!
An example of a future date (1 year, 3 months, 4 days, 7 hours, 15 minutes and 22 seconds into the future), in a specific format (see "strftime"):
get-time to time_var timezone "MST" year 1 month 3 day 4 hour 7 minute 15 second 22 format "%A, %B %d %Y, %l:%M %p %Z"
Copied!
Time
get-time
pause-program
See all
documentation
Gg
Purpose: Gliimly general purpose utility: build, test, run, miscellaneous (pronounced "gigi").
gg <options>
Copied!
- -q Build Gliimly application from source code in the current directory. mgrg must run first in this directory with the "-i" option to create the application. You must have at least one Gliimly source file (.gliim file); each such file can implement one or more request handlers (see "--single-file" below). All application source files are contained under the source directory (subdirectories are allowed, see "--exclude-dir"); each request handler can handle any hierarchical path, so your API can be fully hierarchical.
The following options can be used when building an application:
- --db="<database vendor>:<db config file> ..."
Specify a list of databases used in your application. Each element of the list is <database vendor> (which is 'mariadb', 'postgres' or 'sqlite'), followed by a colon (:) and then <db config file>, where <db config file> is used to refer to a database in statements such as run-query.
Each <database vendor>:<db config file> is separated by a space. You can list any number of databases for use in your application. A file in current directory with name <db config file> must exist and contain the connection parameters for database access, and is copied to Gliimly's database configuration directory (see directories). See database-config-file for more details on the content of this file.
- --lflag=<linker flags>
If you wish to add any additional linker flags (such as any non-Gliimly libraries), specify them quoted under this option.
- --cflag=<C flags>
If you wish to add any additional C compiler (gcc) flags, specify them quoted under this option.
- --trace
If specified, tracing information code will be generated (without it, tracing is not available and trace-run statement is ignored). Tracing only works when debugging mode is enabled, so --debug option must be used as well.
- --path=<application path>
This option lets you specify the application path for your request URLs. It is a leading path of a URL prior to request name and any parameters. If empty, the default is the application name preceded by a forward slash:
/<app name>
Copied!
- --maxupload=<max upload size>
Specify maximum upload size for a file (in bytes). The default is approximately 25MB.
- --max-errors=<max errors>
During building of an application, emit a maximum of <max errors> diagnostic messages per .gliim source file. The default is 5.
- --debug
Generate debugging information when compiling your application. Debugging information is required to produce a backtrace file with the stack that contains source code line numbers, in order to pinpoint the exact location where report-error statement was used, or where the application crashed. It is also needed to use gdb for debugging purposes. Note that stack information is produced only when Gliimly is built in debugging mode (see "DI=1" option when installing Gliimly).
- --c-lines
Skip generating line information when compiling .gliim files. By default line information is included, which allows errors to be reported with line numbers in .gliim files. If you want only generated C code line numbers to be used, use this option. This output will omit certain color-coded and other details that are normally present without this option.
- --optimize-memory
Use memory-optimizing garbage collection, which counts memory references and frees memory as soon as possible. The default is without memory optimization, which frees memory at the end of the request. Do not use this option unless your system is seriously starved for memory, because it imposes a performance penalty (in some tests 15-25%).
- --public
Change the default behavior of request handler safety so that request handlers without "public" or "private" clause are by default "public"; see begin-handler for more details.
- --single-file
A request handler is written in a source file whose path matches fully or partially that of the request, and such a file can contain any number of request handlers that match, see request. If, however "--single-file" is used, each request has to be in its own file whose path matches fully the request path, and no other request can be implemented in such file. For example, with "--single-file", request "/myreq" has to be in file "myreq.gliim" in the source directory, while request "/other/newreq" has to be in file "other/newreq.gliim" (meaning in file "newreq.gliim" in subdirectory "other" in the source directory).
- --exclude-dir
By default, all ".gliim" files (including in all subdirectories regardless of how many levels there may be), are picked up for compilation. If "--exclude-dir" is used, then you can specify any number of subdirectories, separated by commas, to be excluded.
- --parallel=<threads>
Use <threads> number of threads to compile the application. By default, the number of threads is equal to the number of CPUs (including virtual), allowing each CPU to compile one source file at a time; this is usually the fastest way. You can serialize compilation with "--parallel=1"; or you can set <threads> to any number between 1 and three times the number of CPUs in order to reach your performance and CPU utilization goals.
- --posix-regex
Use the POSIX ERE (Extended Regular Expressions) regex library built into Linux instead of the default PCRE2, see match-regex. While the two are largely compatible, you can use either one depending on your needs.
- --plain-diag
Do not use color-coded and more detailed Gliimly diagnostic output. While rare, you may need this option in cases when there may be a Gliimly or underlying compiler bug, or for some other reason.
- -c,--clean
Clean all object and other intermediate files, so that the subsequent application build is a full recompilation. Use it alone and prior to rebuilding the application.
Note that when any gg compilation options change, the application is rebuilt (i.e. the change has the effect of "--clean").
- -i
Display both include and linking flags for an application that uses Client-API to connect to Gliimly service. The flags are for C compiler (gcc). If "--include" option is used in addition, then only include flags are displayed. If "--link" option is used in addition, then only linking flags are displayed. Use this to automate building of client applications with tools like Makefile.
- -v
Display Gliimly version as well as the Operating System version.
- -s
Trace the execution of gg utility and display all the steps in making your application.
- -e <num of errors>
Show the last <num of errors> from the backtrace file, which receives error message and stack trace when program crashes or report-error is issued. Also display the path to backtrace file which contains the stack details.
- -t <num of trace files>
Show the last <num of trace files> most recent trace files for the application. This is useful when tracing (see trace-run) to quickly find the trace files where Gliimly writes to. Also display the path to backtrace file which contains the stack details.
- -o
Show documentation directory - web page documentation is located here in the form of a gliimdoc.html file.
- -l
Show library directory - Gliimly's libraries and v1 code processor are located there.
- -r [ --req="/<request name>[<url parameters>]" ]
[ --app="application path" ]
[ --service [ --remote="server:port" ] [ --socket="socket path" ] ]
[ --method="<request method>" ]
[ --content="<input file>" --content-type="<content type>" ]
[ --silent-header ]
[ --arg="<arguments>" ]
[ --exec ]
Run a command-line program, or make a service request, or display bash code to do the same for use in scripts.
If you are not in the application's source code directory, then you must specify the "--app" option to supply the application path (typically "/<application name>", see request). You can use the "--req" option to specify the request name and optional URL parameters (see request), for example:
gg -r --req="/encrypt" --exec
Copied!
to execute request "encrypt", or
gg -r --req="/encrypt/data=somedata?method=aes256" --exec
Copied!
where "/encrypt" is the request name, and "/data=somedata?method=aes256" represents the URL parameters.
Use --method to specify the HTTP request method, for instance:
gg -r --req="/encrypt/data=somedata?method=aes256" --method=POST --exec
Copied!
If not specified, the default method is "GET".
If "--service" is not used, then command-line program will execute and you can specify program arguments with "--arg" option, in which case "<arguments>" is a string (double or single quoted) that contains any number of program arguments. To specify arguments for a service see "-a" option in mgrg.
If "--service" is used, then application server will be contacted to execute a service; in this case if "--remote" is not specified, a local Unix socket is used to contact the server; otherwise "server:port" specified in "--remote" is the IP/name and port of the server to call, separated by a colon (":"). In case of a local Unix socket, the socket path is by default "/var/lib/gg/<app name>/sock/sock", where "/<app name>" is given by the last path segment in "--app" option, or if not specified it is derived from the name of a Gliimly application built in the current directory; otherwise the socket path is given by "--socket" option.
By default, the output in any case will have the HTTP headers. If you don't want those to appear, use "--silent-header" option.
If "--content" is used, then file <input file> is either piped to the standard input of a command-line program (if "--service" is not used), or sent as a content to the application server (if "--service" is used). You can also specify content type with "--content-type". For example:
gg -r --app="/my_app" --req="/some_request?par1=val1&par2=20&par3=4" --method=PATCH --content=something.json --content-type=application/json --exec
Copied!
Examples of using "-r" option to execute command-line program or to call a service:
gg -r --req="/json" --exec
gg -r --req="/json" --app="/app_name" --service --exec
gg -r --req="/json?act=perf" --app="/app_name" --service --socket="/sock_path/sock" --exec
gg -r --req="/json/act=perf" --app="/app_name" --service --remote="192.168.0.21:2301" --exec
Copied!
- Performance
"gg -r" can be used both for testing and in production, however for maximum performance, skip "--exec" option to display direct bash code that you can copy and paste to use in production. This direct code is about 300% faster than using "gg -r"; keep this in mind if performance of using "gg -r" is important. When "--exec" is not used, the output may look like this:
export CONTENT_TYPE=
export CONTENT_LENGTH=
unset GG_SILENT_HEADER
export GG_SILENT_HEADER
export REQUEST_METHOD=GET
export SCRIPT_NAME="/enc"
export PATH_INFO="/encrypt/data/somedata"
export QUERY_STRING="method=aes256"
/var/lib/gg/bld/enc/enc
Copied!
If you copy the above and paste into bash shell, it will execute the command line program which handles the request specified (which gg would do when "--exec" is specified, but not as fast). Note that SCRIPT_NAME will be set to whatever application path you use (i.e. the default or if set with "--path" option when making the application; or with "--app" option here), see request.
If you need to have run-time parameter(s) to "gg -r", escape them when displaying the direct bash code and run with "eval", for instance:
COMM=$(gg -r --req="/func_test/update-data/key=\$i/value=d_\$i" --service --remote="127.0.0.1:2301")
...
for i in {1..1000}; do
...
RES=$(eval "$COMM")
...
echo "Result is $RES"
done
Copied!
In this example, the code with "$i" variable is created, and then evaluated in a bash loop of 1000 iterations, with each execution of your service using dynamic run-time input parameter "$i", but without executing "gg -r" 1000 times.
- -u
Read stdin (standard input), substitute any environment variables in the form of ${<var name>} with their values, and output to stdout (standard output). This is useful in processing configuration files that do not have parameter values hardcoded, but rather take them from the environment.
- -m
Add Gliimly syntax and keyword highlighting rules for files with .gliim extension to Vim editor for the currently logged on user. Note that you must have Vim installed; vi alone will not work.
- -h
Display help.
- Make application (-q), use three databases (--db) named mdb (MariaDB database), pdb (PostgreSQL) and sdb (SQLite), produce debugging information (--debug), produce tracing information (--trace):
gg -q --db="mariadb:mdb postgres:pdb sqlite:sdb" --debug --trace
Copied!
- Make application, use MariaDB database db (--db), specify linker and C compilation flags, specify maximum upload size of about 18M:
gg -q --db="mariadb:db" --lflag "-Wl,-z,defs" --cflag "-DXYZ123" --maxupload 18000000
Copied!
- Make application that doesn't use any databases:
gg -q
Copied!
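- Make application with two parallel compilation threads and allow up to 10 diagnostic messages per source file (a sketch combining the "--parallel" and "--max-errors" options described above; the specific numbers are arbitrary):
gg -q --parallel=2 --max-errors=10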
Gliim compiler and utility
gg
See all
documentation
Handler status
Purpose: Set handler return status.
handler-status <request status>
Copied!
handler-status specifies <request status>, which must be a number.
<request status> can be obtained with "handler-status" clause in read-remote in the service caller.
When the program runs as a command-line program, <request status> is the program's exit code.
handler-status can be used anywhere in the code; it does not end the request's processing. To do that, either use exit-handler or simply allow the handler to reach its end.
When handler-status is not used, the default exit code is 0. When multiple handler-status statements run in a sequence, the request status is that of the last one that executes.
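For instance, in the following sketch the request status ends up being 3, since that is the last handler-status statement to execute:
handler-status 7
...
handler-status 3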
If you want to specify request status and exit request processing at the same time, use exit-handler.
When the program exits, its exit code will be 12:
handler-status 12
...
exit-handler
Copied!
Program execution
exec-program
handler-status
See all
documentation
Hash string
Purpose: Hash a string.
hash-string <string> to <result> \
[ binary [ <binary> ] ] \
[ digest <digest algorithm> ]
Copied!
hash-string produces by default a SHA256 hash of <string> (if "digest" clause is not used), and stores the result into <result>. You can use a different <digest algorithm> in "digest" clause (for example "SHA3-256"). To see a list of available digests:
openssl list -digest-algorithms
Copied!
If "binary" clause is used without boolean variable <binary>, or if <binary> evaluates to true, then the <result> is a binary string that may contain null-characters. With the default SHA256, it is 32 bytes in length, while for instance with SHA3-384 it is 48 bytes in length, etc.
Without "binary" clause, or if <binary> evaluates to false, each binary byte of hashed string is converted to two hexadecimal characters ("0"-"9" and "a"-"f"), hence <result> is twice as long as with "binary" clause.
String "result" will have a hashed value of the given string, an example of which might look like "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855":
hash-string "hello world" to hash
Copied!
Using a different digest:
hash-string "hello world" to hash digest "sha3-384"
Copied!
Producing a binary value instead of a null-terminated hexadecimal string:
hash-string "hello world" to hash digest "sha3-384" binary
Copied!
Encryption
decrypt-data
derive-key
encrypt-data
hash-string
hmac-string
random-crypto
random-string
See all
documentation
Hmac string
Purpose: Create HMAC.
hmac-string <string> to <result> \
key <key> \
[ binary [ <binary> ] ] \
[ digest <digest algorithm> ]
Copied!
hmac-string produces by default a SHA256-based HMAC (Hash Message Authentication Code) of <string> (if "digest" clause is not used) using secret <key>, and stores the result into <result>. You can use a different <digest algorithm> in "digest" clause (for example "SHA3-256"). To see a list of available digests:
openssl list -digest-algorithms
Copied!
If "binary" clause is used without boolean variable <binary>, or if <binary> evaluates to true, then the <result> is a binary string that may contain null-characters. With the default SHA256, it is 32 bytes in length, while for instance with SHA3-384 it is 48 bytes in length, etc.
Without "binary" clause, or if <binary> evaluates to false, each binary byte of HMAC is converted to two hexadecimal characters ("0"-"9" and "a"-"f"), hence <result> is twice as long as with "binary" clause.
String "result" will have a HMAC value of a given string, an example of which might look like "2d948cc89148ef96fa4f1876e74af4ce984423d355beb12f7fdba5383143bee0"
hmac-string "some data" key "mykey" to result
Copied!
Using a different digest:
hmac-string "some data" key "mykey" to result digest "sha3-384"
Copied!
Producing a binary value instead of a null-terminated hexadecimal string, and then making a Base64 string out of it:
hmac-string "some data" key "mykey" digest "SHA256" to result binary
encode-base64 result to bresult
Copied!
Encryption
decrypt-data
derive-key
encrypt-data
hash-string
hmac-string
random-crypto
random-string
See all
documentation
If defined
Purpose: Conditional compilation.
if-defined <symbol>
<any code>
end-defined
if-not-defined <symbol>
<any code>
end-defined
Copied!
if-defined will cause <any code> to be compiled if <symbol> is defined (see "--cflag" option in gg). If <symbol> is not defined, then <any code> is not compiled at all.
if-not-defined will cause <any code> to be compiled if <symbol> is not defined (see "--cflag" option in gg). If <symbol> is defined, then <any code> is not compiled at all.
The following code will have a different output depending on how the application is compiled:
if-defined DEF1
@Defined
end-defined
if-not-defined DEF1
@Not defined
end-defined
Copied!
If compiled with:
gg -q
Copied!
then the output is:
Not defined
Copied!
If compiled with:
gg -q --cflag="-DDEF1"
Copied!
then the output is:
Defined
Copied!
Program flow
break-loop
code-blocks
continue-loop
do-once
exit-handler
if-defined
if-true
set-bool
start-loop
See all
documentation
If true
Purpose: Conditional statement.
if-true <condition>
<any code>
[
else-if <condition>
<any code>
] ...
[
else-if
<any code>
]
end-if
Copied!
where <condition> is:
( <comparison> [ and <comparison> [ , ... ] ] )
|
( <comparison> [ or <comparison> [ , ... ] ] )
Copied!
<comparison> is for strings:
<string> \
( equal | not-equal | \
lesser-than | lesser-equal | \
greater-than | greater-equal | \
contain | not-contain ) \
<check string> \
[ case-insensitive ] [ length <length> ]
Copied!
<comparison> is for numbers
<number> \
( equal | not-equal | \
lesser-than | lesser-equal | \
greater-than | greater-equal | \
every | not-every ) \
<check number> ...
Copied!
<comparison> is for booleans:
<boolean> ( equal | not-equal ) <check boolean> ...
Copied!
if-true evaluates a <condition> and executes the code associated with it. If the <condition> in if-true or "else-if" succeeds, then <any code> below it is executed. If it does not succeed, then the conditions in the following "else-if" statements are evaluated one by one, until one succeeds and the code under it executes. If none succeeds, then the code under the "else-if" statement without a <condition> executes (if specified); otherwise program control passes beyond "end-if".
A <condition> is made of one or more <comparison>s, connected by either the "and" or the "or" clause, but not both in the same <condition>. The "and" clause uses logical AND to connect <comparison>s, and it succeeds if all <comparison>s succeed. The "or" clause uses logical OR to connect <comparison>s, and it succeeds if at least one <comparison> succeeds (once such a <comparison> is found, the following ones are not checked).
Each <comparison> examines either a string, a number or a boolean variable.
String variable in a comparison
If "equal", "not-equal", "lesser-than", "lesser-equal", "greater-than" or "greater-equal" clause is used, a comparison succeeds if <string> is equal, not equal, lesser, lesser or equal, greater or greater-or-equal than <check string>, respectively. If "contain" or "not-contain" clause is used, a comparison succeeds if <string> is contained or not contained in <check string>, respectively. If "case-insensitive" clause is used, a comparison is performed without case sensitivity. If "length" clause is used, only the first <length> bytes of the strings are compared.
Number variable in a comparison
If "equal", "not-equal", "lesser-than", "lesser-equal", "greater-than" or "greater-equal" clause is used, a comparison succeeds if <number> is equal, not equal, lesser, lesser or equal, greater or greater-or-equal than <check number>, respectively.
If "every" is used, then the comparison succeeds if the modulo of <number> and <check number> is 0 - this is useful in executing some code every N times but not the ones in between; with "not-every" the comparison success is this modulo is not 0 which is useful to execute code all the times except every Nth.
Boolean variable in a comparison
If "equal" or "not-equal" clause is used, a comparison succeeds if <boolean> is equal or not equal than <check boolean>, respectively.
else-if without a <condition>
With a given if-true, there can be only one "else-if" statement without a condition, and it must be the last one.
if-true statements can be nested up to 30 levels deep.
%% /if-test public
get-param inp
if-true inp equal "1"
@Found "1" in input
else-if inp equal "2" or inp equal "3"
@Found "2" or "3" in input
get-param inp_num
string-number inp_num to num
if-true num equal 4
@Found 4 in more input
else-if num equal 5 and inp equal "4"
@Found 5 in more input and "4" in input
else-if
@Something else
end-if
else-if
@Found something else
end-if
%%
Copied!
Program flow
break-loop
code-blocks
continue-loop
do-once
exit-handler
if-defined
if-true
set-bool
start-loop
See all
documentation
Inline code
Purpose: Inline Gliimly code in an output statement.
<<gliim code>>
Copied!
You can write Gliimly statements within an output-statement by using them between << and >> delimiters.
The following statements can be inlined: p-path, p-out, p-web, p-url, pf-out, pf-web, pf-url, p-num, current-row, number-string, string-length, call-handler.
p-out statement displays a string, and in the following code it's used to display a result within an output-statement (i.e. within "@" statement):
run-query ="select firstName, lastName from people" output firstName, lastName
@<tr>
@ <td>
@ First name is <<p-out firstName>>
@ </td>
@ <td>
@ Last name is <<p-out lastName>>
@ </td>
@</tr>
end-query
Copied!
In the code below, "/some-req" is a request handler that outputs some text, and it's used inline to output "Hello world":
@Hello <<call-handler "/some-req">>
Copied!
call-handler "some-req" would simply output "world":
%% /some-req public
@world
%%
Copied!
A write-string is typically used with output statements; in this case we print the value of another string, resulting in "There is 42 minutes left!":
set-string mins="42"
(( my_string
@There is <<p-out mins>> minutes left!
))
Copied!
Language
inline-code
statements
syntax-highlighting
unused-var
variable-scope
See all
documentation
Install arch
For Arch-like distributions (like Arch or Manjaro):
- Install git
sudo pacman --noconfirm -Sy --overwrite "*" git
Copied!
- Download Gliimly
- Install dependencies
sudo pacman --noconfirm -Sy --overwrite "*" make gcc openssl curl tar 'mariadb-connector-c' fcgi 'postgresql-libs' sqlite3 pcre2
Copied!
- Build Gliimly
Use DI=1 to build debug version, leave it as is for production:
make clean
make DI=0
sudo make install
Copied!
Download and build
install
install-arch
install-debian
install-fedora
install-opensuse
uninstall
See all
documentation
Install debian
Install Gliimly on Debian
For Debian-like distributions (like Ubuntu or LinuxMint):
- Install git
sudo apt update
sudo apt -y install git
Copied!
- Download Gliimly
- Install dependencies
sudo apt -y install make gcc libssl-dev curl tar libcurl4 libcurl4-openssl-dev libmariadb-dev zlib1g-dev libfcgi-bin libfcgi-dev libpq-dev libsqlite3-dev libpcre2-dev
Copied!
- Build Gliimly
Use DI=1 to build debug version, leave it as is for production:
make clean
make DI=0
sudo make install
Copied!
Download and build
install
install-arch
install-debian
install-fedora
install-opensuse
uninstall
See all
documentation
Install fedora
Install Gliimly on Fedora
For Fedora-like distributions (like Rocky or RedHat):
- Install git
sudo dnf -y install git
Copied!
- Download Gliimly
- Install dependencies
- Build Gliimly
Use DI=1 to build debug version, leave it as is for production:
make clean
make DI=0
sudo make install
Copied!
Download and build
install
install-arch
install-debian
install-fedora
install-opensuse
uninstall
See all
documentation
Install
Installing and un-installing Gliimly
Use the latest release for the distribution you're running; see install-arch, install-debian, install-fedora, or install-opensuse.
To uninstall Gliimly, see uninstall.
Download and build
install
install-arch
install-debian
install-fedora
install-opensuse
uninstall
See all
documentation
Install opensuse
Install Gliimly on OpenSUSE
For OpenSUSE-like distributions:
- Install git
sudo zypper -n install git
Copied!
- Download Gliimly
- Install dependencies
sudo zypper -n install make gcc openssl-devel curl tar libcurl-devel pcre2-devel libmariadb-devel FastCGI FastCGI-devel postgresql-devel sqlite3-devel
Copied!
- Build Gliimly
Use DI=1 to build debug version, leave it as is for production:
make clean
make DI=0
sudo make install
Copied!
Download and build
install
install-arch
install-debian
install-fedora
install-opensuse
uninstall
See all
documentation
Json doc
Purpose: Parse JSON text.
json-doc <text> to <json> \
[ status <status> ] [ length <length> ] [ noencode ] \
[ error-text <error text> ] [ error-position <error position> ]
json-doc delete <json>
Copied!
json-doc will parse JSON <text> into <json> variable, which can be used with read-json to get the data.
The length of <text> may be specified with "length" clause in <length> variable, or if not, it will be the string length of <text>.
The "status" clause specifies the return <status> number, which is GG_OKAY if successful or GG_ERR_JSON if there is an error. The number <error position> in "error-position" clause is the byte position in <text> where error was found (starting with "0"), in which case <error text> in "error-text" clause is the error message.
String <text> is modified during parsing for performance reasons, to minimize memory copying. If you don't wish <text> to be modified, make a copy of it before parsing it (see copy-string). In many cases though, this is not necessary, allowing for better performance.
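A sketch of that approach, assuming the copy-string statement's "<source> to <destination>" form (variable names are arbitrary):
copy-string jd to jd_copy
json-doc jd_copy status st to njson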
"noencode" clause will not encode strings, i.e. convert from JSON Unicode strings to UTF8, nor will it perform any validity checks on strings. This may be useful as a performance boost, however it is not recommended in general.
The maximum depth of nested structures in JSON document (i.e. objects and arrays) is 32, and the maximum length of normalized leaf node name is 1024 (see read-json for more on normalized names). There is no limit on document size.
Parse the following JSON document and display all keys and values from it. You can use them as they come along, or store them into new-array or new-index for instance for searching of large documents. This also demonstrates usage of UTF8 characters:
set-string jd unquoted = {"menu":\
{"id": "file",\
"value": 23091,\
"active": false,\
"popup":\
{"menuitem":\
[{"value": "New", "onclick": "CreateNewDoc with\uD834\uDD1Emusic"},\
{"value": "Open", "onclick": "OpenDoc() with \uD834\uDD1E\uD834\uDD1E"},\
{"value": "Close", "onclick": "\uD834\uDD1ECloseDoc()"}\
]\
}\
}\
}
json-doc jd status st error-text et error-position ep to nj
if-true st not-equal GG_OKAY
@Error [<<p-out et>>] at [<<p-num ep>>]
exit-handler -1
end-if
start-loop
read-json nj key k value v type t next
if-true t equal GG_JSON_TYPE_NONE
break-loop
end-if
@Key [<<p-out k>>]
@Value [<<p-out v>>]
@Type [<<p-num t>>]
@--------
end-loop
@
json-doc delete nj
Copied!
The output would be:
Key ["menu"."id"]
Value [file]
Type [0]
--------
Key ["menu"."value"]
Value [23091]
Type [1]
--------
Key ["menu"."active"]
Value [false]
Type [3]
--------
Key ["menu"."popup"."menuitem"[0]."value"]
Value [New]
Type [0]
--------
Key ["menu"."popup"."menuitem"[0]."onclick"]
Value [CreateNewDoc with𝄞music]
Type [0]
--------
Key ["menu"."popup"."menuitem"[1]."value"]
Value [Open]
Type [0]
--------
Key ["menu"."popup"."menuitem"[1]."onclick"]
Value [OpenDoc() with 𝄞𝄞]
Type [0]
--------
Key ["menu"."popup"."menuitem"[2]."value"]
Value [Close]
Type [0]
--------
Key ["menu"."popup"."menuitem"[2]."onclick"]
Value [𝄞CloseDoc()]
Type [0]
--------
Copied!
JSON parsing
json-doc
read-json
See all
documentation
License
Gliimly is Free Open Source software licensed under Apache License 2.
Gliimly is copyright (c) 2019-now Gliim LLC.
The following discussion is not legal advice.
Gliimly makes use of the following dynamically-linked libraries (with copyright by their respective owners), and only if your application actually uses them:
You would install these libraries yourself (they are not distributed with Gliimly) as dependencies for compiling the source code. Gliimly does not link to any outside static libraries.
Gliimly uses FNV-1a hash function, which is released in the public domain (see wikipedia page) and is not patented (see Landon Noll's web page).
Gliimly source code uses SPDX, an open ISO standard for communicating software bill of materials information (such as licenses), to improve open source licensing compliance for companies and communities.
License
license
See all
documentation
Lock file
Purpose: Lock a file exclusively.
lock-file <file path> id <lock id> status <status>
Copied!
lock-file attempts to create a file with the full path of <file path> (deleting it if it exists), and to lock it exclusively. If successful, no other process can do the same until the current process ends or calls unlock-file. This statement is non-blocking, thus you can check in a sleepy loop for success (see pause-program).
<file path> should be either an existing or non-existing file with a valid file path. If existing, it will be deleted.
<lock id> (in "id" clause) is a file descriptor associated with locked file.
<status> (in "status" clause) represents the status of file locking: GG_OKAY if successfully locked, GG_ERR_FAILED if cannot lock (likely because another process holds a lock), GG_ERR_INVALID if the path of the file is invalid (i.e. if it is empty), GG_ERR_CREATE if lock file cannot be created.
Generally, this statement is used for process control. A process would use lock-file to start working on a job that only one process can do at a time; once done, by using unlock-file statement, another process will be able to issue a successful lock-file call. Typically, lock-file is issued in a sleepy loop (see pause-program), waiting for a resource to be released.
Note that file lock is not carried across children processes (i.e. if your process creates children, such as with exec-program, then such children must obtain their own lock). In addition, if a process serving a request terminates before the request could issue unlock-file, the lock will be automatically released.
You can use any file name (likely choose a name that signifies the purpose of a lock, as long as you have permissions to create it), and create any number of them. This way you can create as many "binary semaphore"-like objects as you like.
%% /locktest public
get-app directory to dir
write-string fname
@<<p-out dir>>/.lock
end-write-string
set-number lockid
start-loop
lock-file fname id lockid status lockst
if-true lockst equal GG_OKAY
@WORKING
pause-program 20000
@DONE
break-loop
else-if lockst equal GG_ERR_FAILED
pause-program 1000
@WAITING
continue-loop
else-if lockst equal GG_ERR_OPEN or lockst equal GG_ERR_INVALID
@BAD LOCK
exit-handler
end-if
end-loop
unlock-file id lockid
%%
Copied!
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Lower string
Purpose: Lower-case a string.
lower-string <string>
Copied!
lower-string converts all <string>'s characters to lower case.
The resulting "str" is "good":
set-string str="GOOD"
lower-string str
Copied!
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Mariadb database
The MariaDB configuration file is written as a MariaDB client options file.
You can see the parameters available at https://mariadb.com/kb/en/configuring-mariadb-connectorc-with-option-files/#options.
Most of the time, though, you would likely use only a few of those options, as in (for local connection):
[client]
user=myuser
password=mypwd
database=mydb
socket=/run/mysqld/mysqld.sock
Copied!
The above file has fields "user" (MariaDB user), "password" (the password for MariaDB user), "database" (the MariaDB database name) and MariaDB communication "socket" location (assuming your database is local to your computer - if the database is across network you would not use sockets!).
If you use passwordless MariaDB login (such as when the MariaDB user name is the same as your Operating System user name and where unix socket plugin is used for authentication), the password would be empty.
To get the location of the socket, you might use:
sudo mysql -u root -e "show variables like 'socket'"
Copied!
Database
begin-transaction
commit-transaction
current-row
database-config-file
db-error
mariadb-database
postgresql-database
rollback-transaction
run-query
sqlite-database
See all
documentation
Match regex
Purpose: Find, or find and replace patterns in strings using regex (regular expressions).
match-regex <pattern> in <target> \
[ \
( replace-with <replace pattern> \
result <result> \
[ status <status> ] ) \
| \
( status <status> \
[ case-insensitive [ <case-insensitive> ] ] \
[ single-match [ <single-match> ] ] \
[ utf8 [ <utf8> ] ] ) \
] \
[ cache ] \
[ clear-cache <clear cache> ]
Copied!
match-regex searches <target> string for regex <pattern>. If "replace-with" is specified, then instance(s) of <pattern> in <target> are replaced with <replace pattern> string, and the result is stored in <result> string.
The number of found or found/replaced patterns can be obtained in number <status> variable (in "status" clause).
If "replace-with" is not specified, then the number of matched <pattern>s within <target> is given in <status> number, which in this case must be specified.
If "case-insensitive" is used without boolean variable <case-insensitive>, or if <case-insensitive> evaluates to true, then searching for pattern is case insensitive. By default, it is case sensitive.
If "single-match" is specified without boolean variable <single-match>, or if <single-match> evaluates to true, then only the very first occurrence of <pattern> in <target> is processed. Otherwise, all occurrences are processed.
If "utf8" is used, then the pattern itself and all data strings used for matching are treated as UTF-8 strings.
<result> and <status> variables can be created within the statement.
If the pattern is bad (i.e. <pattern> is not a correct regular expression), Gliimly will error out with a message.
By default, PCRE2 regex syntax (Perl-compatible Regular Expressions v2) is used. To use extended regex syntax (Posix ERE), specify "--posix-regex" when building your application with gg. See more below in Limitations about differences.
If "cache" clause is used, then regex compilation of <pattern> will be done only once and saved for future use. There is a significant performance benefit when match-regex executes repeatedly with "cache" (such as in case of web applications or in any kind of loop). If <pattern> changes and you need to recompile it once in a while, use "clear-cache" clause. <clear cache> is a "bool" variable; the regex cache is cleared if it is true, and stays if it is false. For example:
set-bool cl_c
if-true q equal 0
set-bool cl_c = true
end-if
match-regex ps in look_in replace-with "Yes it is \\1!" result res cache clear-cache cl_c
Copied!
In this case, when "q" is 0, cache will be cleared, and the pattern in variable "ps" will be recompiled. In all other cases, the last computed regex stays the same.
While every pattern is different, in tests even a relatively small pattern with "cache" sped up match-regex by about 500%, i.e. 5x faster. Use cache whenever possible, as it brings parsing performance close to its theoretical limits.
Subexpressions and back-referencing
Subexpressions are referenced via a backslash followed by a number. Because in strings a backslash followed by a number is an octal number, you must use double backslash (\\). For example:
match-regex "(good).*(day)" \
in "Hello, good day!" \
replace-with "\\2 \\1" \
result res
Copied!
will produce string "Hello, day good!" as a result in "res" variable. Each subexpression is within () parenthesis, so for instance "day" in the above pattern is the 2nd subexpression, and is back-referenced as \\2 in replacement.
There can be a maximum of 23 subexpressions.
Note that backreferences to non-existing subexpressions are ignored - for example \\4 when there are only 3 subexpressions. Gliimly is "smart" about using two digits and differentiating between \\1 and \\10 for instance - it takes into account the actual number of subexpressions and their validity, and selects a proper subexpression even when two digits are used in a backreference.
Lookaheads and lookbehinds
match-regex supports syntax for lookaheads (i.e. "(?=...)" and "(?!...)") and lookbehinds (i.e. "(?<=...)" and "(?<!...)"). See PCRE2 pattern matching for more details. For instance, the following matches "bar" only if preceded by "foo":
match-regex "\\w*(?<=foo)bar" in "foobar" status st single-match
Copied!
and the following matches "foo" if followed by "bar":
match-regex "\\w*foo(?=bar)" in "foofoo" status st single-match
Copied!
If you are using older versions of PCRE2 (10.36 or earlier), such as by default on Debian 10 or Ubuntu 18, then instead of PCRE2, Extended Regular Expressions (ERE) from the built-in Linux regex library are used, due to older PCRE2 versions having name conflicts with other libraries. In this case, the "utf8" clause will have no effect, and the lookahead/lookbehind functionality will not work (possibly a few other features as well); for the most part, however, the two are compatible. If your system uses an older PCRE2 library, you can upgrade to 10.37 or later to use PCRE2.
In any case, if you need to use Posix ERE instead of PCRE2 (for compatibility, to reduce memory footprint or some other reason), you can use "--posix-regex" option of gg; the same limitations as above apply. Note that PCRE2 (the default) is generally faster than ERE.
Use match-regex statement to find out if a string matches a pattern, for example:
match-regex "SOME (.*) IS (.*)" in "WOW SOME THING IS GOOD HERE" status st
Copied!
In this case, the first parameter ("SOME (.*) IS (.*)") is a pattern and you're matching it with string ("WOW SOME THING IS GOOD HERE"). Since there is a match, status variable (defined on the fly as integer) "st" will be "1" (meaning one match was found) - in general it will contain the number of patterns matched.
Search for patterns and replace them by using replace-with clause, for example:
match-regex "SOME (.*) IS ([^ ]+)" in "WOW SOME THING IS GOOD HERE FOR SURE" replace-with "THINGS ARE \\2 YES!" result res status st
Copied!
In this case, the result from replacement will be in a new string variable "res" specified with the result clause, and it will be
WOW THINGS ARE GOOD YES! HERE FOR SURE
Copied!
The above demonstrates a typical use of subexpressions in regex (meaning "()" statements) and their referencing with "\\1", "\\2" etc. in the order in which they appear. Consult regex documentation for more information. Status variable specified with status clause ("st" in this example) will contain the number of patterns matched and replaced.
Matching is by default case sensitive. Use "case-insensitive" clause to change it to case insensitive, for instance:
match-regex "SOME (.*) IS (.*)" in "WOW some THING IS GOOD HERE" status st case-insensitive
Copied!
In the above case, the pattern would not be found without "case-insensitive" clause because "SOME" and "some" would not match. This clause works the same in matching-only as well as replacing strings.
If you want to match only the first occurrence of a pattern, use "single-match" option:
match-regex "SOME ([^ ]+) IS ([^ ]+)" in "WOW SOME THING IS GOOD HERE AND SOME STUFF IS GOOD TOO" status st single-match
Copied!
In this case there would be two matches by default ("SOME THING IS GOOD" and "SOME STUFF IS GOOD") but only the first would be found. This clause works the same for replacing as well - only the first occurrence would be replaced.
Regex
match-regex
See all
documentation
Memory handling
Gliimly is a memory-safe language.
Your application cannot access memory outside of valid statement results. Trying to do so will result in your program erroring out.
Memory allocated by Gliimly statements is tracked and freed at the end of the request. You can also use a memory-optimizer (see "--optimize-memory" in gg) which frees memory as soon as possible, however keep in mind that it comes with a performance penalty and is not recommended unless you have very little available memory.
With Gliimly there is no need to free memory manually. Memory is automatically freed even if it is no longer accessible to the program, thus preventing memory leaks; this is important for stability of long-running processes.
Some statements (new-index, new-array, new-list and set-string) have the option of allocating memory that won't get freed at the end of the request and is available to any request served by the same process. This kind of memory is called "process-scoped". A process-scoped string can be manually freed.
Gliimly handles memory references and assignments automatically, preventing dangling memory.
String results of any Gliimly statements will always create new memory, unless stated otherwise.
Any files opened by the open-file statement are automatically closed by Gliimly at the end of the request. This enhances the stability of long-running server processes because a Linux system by default allows only about 1000 open files per process. A bug can quickly exhaust this pool and cause a malfunction or a crash; Gliimly prevents this by closing any such open files when the request ends.
Memory
memory-handling
See all
documentation
Mgrg
Purpose: Run and manage services.
mgrg <options> <app name>
Copied!
mgrg (pronounced "em-greg") is a service manager. A service is started as a number of concurrent processes serving application requests, typically from reverse-proxy servers such as Apache, Nginx, HAProxy or others. Use mgrg to create Gliimly applications, including both service and command-line.
A number of options are available to setup and manage the execution of a Gliimly program as an application server, which can be accessed either via TCP/IP or a Unix domain socket.
<app name> specifies the name of your application. Each application must have a unique name. <app name> may contain alphanumeric characters and an underscore, must start with a character and its maximum length is 30.
mgrg runs as a light-weight daemon (often requiring only 100-150K of resident RAM), with a separate instance for each application specified by the <app name>. When mgrg starts your service, its current directory is set to /var/lib/gg/<app name>/app. The permissions context is inherited from the caller, so the effective user ID, group ID and any supplemental groups are that of the caller. You can use tools like runuser to specifically set the permissions context.
mgrg will restart service processes that exited or died, keeping the number of processes as specified, unless the -n option is used. The number of worker processes can be specified as a fixed number (-w option), or it can change dynamically based on the load (-d option), including none at all; in that case, worker processes are started when incoming requests come in, and stay up as determined by the request load.
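As a minimal sketch (the application name "myapp", the user "myuser" and the worker count are arbitrary), an application is typically initialized once as root and then started with a fixed number of worker processes:
sudo mgrg -i -u myuser myapp
mgrg -w 3 myapp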
<options> are:
- -i
Initialize the directory and file structure for application <app name>. If you are building an application from source code, this must be executed in the source code directory; mgrg will create file ".gliimapp" which identifies the application so gg can run in the directory. You must run as root when using this option (and must not run as root otherwise) - this is the only mgrg option requiring sudo. The directory structure is set up in /var/lib/gg/<app name> (see directories).
- -u <user>
The owner of your application. This is only used when initializing directory structure used by mgrg (see -i option). Do not use it otherwise. It cannot be root.
- -r <proxy group>
The group of the proxy web server (such as Apache or Nginx). This is only used when initializing the directory structure used by mgrg (see -i option). Do not use it otherwise. It restricts the ability to connect to your application only to the members of said group (in addition to the user who owns your server); otherwise anyone can connect.
- -f
Run in the foreground. The process does not return to the command line prompt until it is stopped. Useful for debugging and where foreground processing is required.
- -p <port number>
TCP/IP port number your service program will listen on (and accept connections), if you are using TCP/IP. You typically need to specify ProxyPass, "location" or similar FastCGI directives in your proxy web server so it can connect to your application. If you are using Client-API or call-remote, you would use "<host name>:<port number>", for instance "127.0.0.1:2301" if the server is local and <port number> is 2301. You can either use TCP/IP or Unix domain sockets (-x option). Typically, you would use Unix domain sockets if proxy web server runs on the same computer as your application server. If you specify neither -x nor -p, -x (unix domain socket) is the default. See SELinux if you are using it, as additional steps may be required.
- -x
Use Unix domain socket to connect from proxy web server to your application server. This socket is automatically created by mgrg and its full path name is "/var/lib/gg/<app name>/sock/sock" (you can connect to it via Client-API, call-remote etc.). When using a proxy web server (like Apache or Nginx), you typically need to specify ProxyPass, "location" or similar FastCGI directives so it can connect to your application. If you specify neither -x nor -p (TCP/IP socket), then -x (unix domain socket) is the default.
- -l <backlog size>
The size of socket listening backlog for incoming connections. It must be a number between 10 and SOMAXCONN-1, inclusive. The default is 400. Increase it if your system is very busy to improve performance.
- -d
Dynamically change the number of service processes ("worker" processes) to match the request load (adaptive mode). Use with "--min-worker" and "--max-worker" options; you cannot use -w with this option. The number of processes needed is determined by pending connections that are not answered by any running process: if there are more incoming connections than processes, the number of processes will grow; if existing processes are capable of handling all incoming requests, the number of processes does not grow and, given the release time (-t option), will slowly decline to the minimum necessary number of workers until the incoming requests warrant otherwise. The number of running processes thus fluctuates based on the actual load and the --min-worker, --max-worker and -t options. If neither -d nor -w is specified, -d is the default (see the example after this list of options).
- --min-worker=<min workers>
Minimum number of service processes that run in adaptive mode (-d option). The default is 5. You can set this to 0 if needed to save memory. This option can be used only with -d option.
- --max-worker=<max workers>
Maximum number of service processes that run in adaptive mode (-d option). The default is 20. This option can be used only with -d option.
- -t <release time>
Timeout before the number of service processes is reduced to meet the reduced load. The default is 30 seconds, and it can be a value between 5 seconds and 86400 seconds (i.e. a day).
- -w <worker processes>
Number of parallel service processes ("worker" processes) that will be started. These processes do not exit; they serve incoming requests in parallel, one request per process. The number of processes should be guided by the concurrent user demand of your application. If neither -d nor -w is specified, -d is the default.
- -m <command>
Send a command to mgrg daemon serving an application. <command> can be "start" (to start service processes), "stop" (to stop them), "restart" (to restart them), "quit" (to stop mgrg daemon altogether) or "status" (to display status of mgrg).
- -n
Do not restart service processes if they exit or die. However, in adaptive mode (-d option), this option has no effect.
- -g
Do not restart service processes when their executable changes. By default, they will be automatically restarted, which is useful during development or when upgrading the server executable.
- -a <args>
Specify any command-line arguments for your application (see "arg-count" and "arg-value" clauses in get-req). The <args> should be double-quoted as a whole; within them, single quotes may be used to quote individual arguments that contain whitespace.
- -z
Suppress HTTP headers in all service handlers in the application. This is equivalent to having silent-header implied at the beginning of each service handler. Use this option only if service is not used as a web service (i.e. the output will not have HTTP headers), or for testing or elsewhere where such headers may not be needed. Otherwise, you can use "--silent-header" option in gg or "GG_SILENT_HEADER" environment variable in Client-API to control from command-line or a client if headers are output or not.
- -s <sleep millisecs>
The basis time period (in milliseconds) that mgrg will sleep before checking for commands (specified by -m option), or check for dead service processes that need restarting. It can be between 100 and 5000 milliseconds. Smaller value will mean higher responsiveness but also higher CPU usage. The default value usually suffices, and should not be changed without due consideration.
- -e
Display verbose messages.
- -c <program>
Full absolute path to your service program. If omitted, the executable /var/lib/gg/bld/<app name>/<app name>.srvc is assumed, which is the standard Gliimly service executable. If present, but without any slashes in it to indicate path (including current directory as ./), then this executable is assumed to be /var/lib/gg/bld/<app name>/<program>.
- -v
Display mgrg version (which matches Gliimly version) as well as copyright and license.
- -h,--help
Display help.
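As an illustration of the adaptive options above (a sketch; "myapp" is a placeholder application name), the following runs "myapp" with between 2 and 10 worker processes, releasing idle workers after 60 seconds:
mgrg -d --min-worker=2 --max-worker=10 -t 60 myapp
Copied!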
mgrg writes its log to the file /var/lib/gg/<app name>/mgrglog/log. This file is overwritten when mgrg starts, so it contains the log of the current daemon instance only.
When starting, mgrg exits with 0 if successful and 1 if it is already running. If the service executable cannot run, the exit code is -1. When creating an application, mgrg exits with 0 if successful, and -1 if not.
When mgrg is told to stop the application (with "-m stop" arguments), it will send SIGTERM signal to all its children. All Gliimly processes will complete the current request before exiting, assuming they are currently processing a request; otherwise they will exit immediately.
If mgrg is terminated without "-m stop" (for example with SIGKILL signal), then all its children will immediately terminate with SIGKILL as well, regardless of whether they are currently processing any requests or not.
- To begin using mgrg for a specific application, you must initialize it first. For example, if your application name is "myapp" and the user who will run application is the currently logged-on user:
sudo mgrg -i -u $(whoami) myapp
Copied!
- The initialization needs to be done only once. Following the above, you can start your service application with 3 server processes:
mgrg -w 3 myapp
Copied!
- To stop your service processes:
mgrg -m stop -- myapp
Copied!
- To restart them:
mgrg -m restart -- myapp
Copied!
- To stop the server entirely (meaning to stop the resident mgrg daemon serving your particular application):
mgrg -m quit -- myapp
Copied!
- To view status of mgrg daemon for your application:
mgrg -m status -- myapp
Copied!
Running your application server on system startup
If you want your application to run on system startup (so you don't have to run it manually), you can add it to systemd configuration. Here is an example (replace <app name> with your application name and <app owner> with the name of the Operating System user under which your application is installed):
[Unit]
Description=Gliimly Service Program Manager for [<app name>] application.
After=network.target
[Service]
Type=forking
ExecStart=/usr/bin/mgrg <app name>
ExecStop=/usr/bin/mgrg -m quit <app name>
KillMode=process
Restart=on-failure
User=<app owner>
[Install]
WantedBy=multi-user.target
Copied!
The above should be saved in the directory given by the output of the following system command:
pkg-config systemd --variable=systemdsystemunitdir
Copied!
The file should be saved as <app name>.service (or similar). Once saved, you can use standard systemctl commands to start, stop and restart your service.
Service manager
mgrg
See all
documentation
New array
Purpose: Create array.
new-array <array> \
[ process-scope ] \
[ hash-size <hash size> ]
Copied!
new-array creates a new array named <array>. An array is a set of key/value pairs, called "elements". A value of an element is obtained based on its key value.
Note that an array is accessible to the current request only, unless "process-scope" clause is used, in which case all requests served by a process can use it (see do-once for a typical way to create an array with a process scope).
If "process-scope" is used, then <array> will keep its data across all requests in a given process. See write-array for an example of a process-scoped array.
An array can be of any size, as long as there is enough memory for it. The "hash-size" of an array refers to the size of a hash table used to provide high-performance access to array elements based on a key.
<hash size> is the number of "buckets" used by the hash table underlying the array (it is 10 by default). All array elements with the same hash code are stored in a linked list within the same hash bucket. Greater <hash size> generally means less array elements per bucket and better performance. However, memory usage grows with a bigger hash table, so <hash size> should be balanced based on the program needs.
Gliimly uses high-performing FNV1_a hash algorithm. Each element in a bucket list is lightweight, containing pointers to a key, value and next element in the linked list.
<hash size> must be at least 10; if less, it will be set to 10.
Create a new array with a hash table with 500 buckets:
new-array h hash-size 500
Copied!
See read-array for more examples.
Array
get-array
new-array
purge-array
read-array
resize-array
write-array
See all
documentation
New fifo
Purpose: Create FIFO list.
new-fifo <list>
Copied!
new-fifo initializes a new FIFO <list> (First In First Out).
<list> contains data stored on a first-in, first-out basis. Note that a list is accessible to the current process only.
Information is retrieved (possibly many times, see rewind-fifo) in the same order in which it was stored.
The internal positions for write-fifo and read-fifo actions are separate so you can keep writing data to the list, and read it independently of writes in any number of retrieval passes by using rewind-fifo.
new-fifo nf
Copied!
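A brief usage sketch, writing to and then reading from the list created above (write-fifo and read-fifo, with their "key" and "value" clauses, are covered in their own sections):
write-fifo nf key "k1" value "v1"
read-fifo nf key k value v status st
Copied!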
FIFO
delete-fifo
new-fifo
purge-fifo
read-fifo
rewind-fifo
write-fifo
See all
documentation
New index
Purpose: Create new index structure for fast key searching.
new-index <index> \
[ key-as "positive integer" ] \
[ unsorted ] \
[ process-scope ]
Copied!
new-index initializes a new <index>.
An index is a hierarchical balanced binary tree structure that allows data access in O(log N) time, meaning that at most approximately "log N" comparisons are needed to find a key in it. For instance, finding a key among 1,000,000 keys would take at most about 20 comparisons. By default (if "unsorted" is omitted), finding the next lesser or greater key is O(1), meaning iterating in sorted order is nearly instantaneous. An index is a hybrid tree structure (taking elements from both B and AVL varieties) optimized for in-memory access.
Information in an index can be inserted, updated or deleted in any order, and accessed in any order as well.
Information in an index is organized in nodes. Each node has a key and a value. A key is used to search for a node in an index. By default (if "unsorted" is omitted), all nodes in a Gliimly index are also connected in an ordered linked list, allowing for very fast range searches.
A node in an index consists of two strings: key and value. There is no limit on the number of nodes in the index, other than available memory. Each key in the index must be unique. Keys should be as short as possible. Generally, longer keys take longer to search for, insert or delete.
In order for any index data structure to function, a key comparison must be performed a certain number of times to find a specific key. Keys are compared using C's strcmp() function, i.e. in ASCII lexicographic order.
"key-as" clause allows to treat a key as something other than a string when it comes to ordering. When "positive integer" value is used, it will treat a key as a string representation of a positive integer number. Such numbers must be zero or positive integers in the 64 bit range, and they must not contain leading zeros, spaces or other prefix (or suffix) characters. For example, key strings may be "0", "123" or "891347". With this clause, sorting these strings according to their converted numerical values is much faster than using schemes such as prefixing numbers with zeros or spaces.
Note that when using "positive integer", you can use numbers in any base (from 2 to 36). In fact, numbers expressed in a higher base are generally faster to search for, because they are shorter in length.
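For illustration, a sketch of a numerically ordered index; this assumes write-index takes "key" and "value" clauses analogous to write-array:
new-index num_idx key-as "positive integer"
write-index num_idx key "7" value "seven"
write-index num_idx key "1000" value "one thousand"
read-index num_idx max-key value v key k
Copied!
Here the maximum key found would be "1000", since keys are compared as numbers rather than as strings.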
If "unsorted" clause is used, <index> will not be sorted in a double-linked list, which means that finding the next smaller or next greater node in repetition (i.e. range search) will be done by using the index structure and not a linked list. This is slower, as each such search is generally done in O(log N) time. Regardless, you can perform range searches in either case (see use-cursor).
As a rule of thumb, if you do not need range searches or your memory is scarce, use "unsorted" as it will save 2 pointers (i.e. 16 bytes) per key and insertion/deletion will be a bit faster, but be aware that range searches will be slower.
If you need faster range searches or the extra memory is not an issue (it would be, for instance, an extra 16MB per 1,000,000 keys), then do not use "unsorted", as your range searches will be faster. Note that in this case, insertion and deletion are a bit slower because they need to maintain a double-linked list; however, in general this effect is offset by the faster range searches.
An index is accessible to the current request only, unless "process-scope" clause is used, in which case all requests served by the process can use it (see do-once for a typical way to create an object with a process scope). If "process-scope" is used, then <index> will keep its nodes across all requests served by the same process; otherwise <index> is purged at the end of the request.
Create a new index:
new-index my_index
Copied!
Index
delete-index
get-index
new-index
purge-index
read-index
use-cursor
write-index
See all
documentation
New lifo
Purpose: Create LIFO list.
new-lifo <list>
Copied!
new-lifo initializes a new LIFO <list> (Last In First Out).
<list> contains data stored on a last-in, first-out basis. Note that a list is accessible to the current process only.
Information is retrieved (possibly many times, see rewind-lifo) in the reverse order from which it was stored.
new-lifo mylifo
Copied!
LIFO
delete-lifo
new-lifo
purge-lifo
read-lifo
rewind-lifo
write-lifo
See all
documentation
New list
Purpose: Create linked list.
new-list <list> [ process-scope ]
Copied!
new-list initializes a new linked <list>, where each element is connected to the previous and next ones.
In a linked <list>, data can be added anywhere in the list, and accessed anywhere as well. Access to a list is sequential, meaning you can position to the first, last, next or previous element. Note that a list is accessible to the current process only.
Generally information is stored in a linked list, and retrieved (possibly many times) in any order later.
A list has a current position where an element can be read, updated, inserted or deleted (via read-list, write-list and delete-list), and this position can be explicitly changed with position-list.
A linked list is accessible to the current request only, unless "process-scope" clause is used, in which case all requests served by the process can use it (see do-once for a typical way to create an object with a process scope). If "process-scope" is used, then elements of the list will keep their value between requests in the same process.
See write-list for an example of a process-scoped list.
new-list mylist
Copied!
Linked list
delete-list
get-list
new-list
position-list
purge-list
read-list
write-list
See all
documentation
New message
Purpose: Create new message.
new-message <message> [ from <string> ]
Copied!
new-message will create a new <message> object.
If <string> is specified (in "from" clause), then <message> is created from it. The <string> must be in SEMI format, which may come from a request's input, from get-message, from reading a file, etc.; in this case <message> can only be read from with read-message.
If new-message is used without "from" clause, data can be added to <message> with write-message.
begin-handler /msg public
new-message msg
write-message msg key "weather" value "nice"
write-message msg key "distance" value "near"
start-loop
read-message msg key k value v status s
if-true s not-equal GG_OKAY
break-loop
end-if
@Key is <<p-out k>> and value is <<p-out v>>
end-loop
end-handler
Copied!
Messages
get-message
new-message
read-message
SEMI
write-message
See all
documentation
New remote
Purpose: Create resources for a service call.
new-remote <service> \
( local <app name> ) | ( location <location> ) \
url-path <service URL> |
( \
app-path <app path> \
request-path <request path> \
[ url-params <url params> ] \
) \
[ request-body content <content> \
[ content-length <content length> ] \
[ content-type <content type> ] ] \
[ method <request method> ] \
[ environment <name>=<value> [ , ... ] ] \
[ timeout <timeout> ]
Copied!
new-remote will create resources needed for a service call (see call-remote); these resources are contained in variable <service>.
If "local" clause is used, then service is a Gliimly application running on the same computer, and the name of this application is string <app name>.
If "local" is not used, then you must use "location" clause. <location> (in "location" clause) is a string representing either a Unix socket or a TCP socket of a remote service, and is:
- for a Unix socket, a fully qualified name to a Unix socket file used to communicate with the service (for a Gliimly server, it's "/var/lib/gg/<app name>/sock/sock", where <app name> is the application name), or
- for a TCP socket, a host name and a port number in the form of "<host name>:<port number>", specifying where the service is listening on (for instance "127.0.0.1:2301" if the service is local and runs on TCP port 2301).
url-path or its components
If "url-path" is used, then it's a URL path to a service.
If "url-path" is not used, then you must use "app-path" and "request-path" clauses with optional "url-params" clause. <app path> string (in "app-path" clause) is the application path used to access a URL resource in service <location>, <request path> string (in "request-path" clause) is the request path used to access a URL resource in service <location>, while <url params> string (in "url-params" clause) is the URL parameters, see request.
<request method> string (in "method" clause) is a request method, such as "GET", "POST", "DELETE", "PUT" etc. The default is "GET".
Request body (i.e. body content) is specified via "request-body" clause. Within it, <content> (in "content" subclause) is the actual body content string. <content length> (in "content-length" subclause) specifies the number of bytes in <content>; by default it will be the string length of <content> (see string-length). Mandatory <content type> (in "content-type" subclause) is the body content type (for instance "application/json" or "image/jpg").
<environment> (in "environment" clause) is the environment passed to a service call, in the form of "name"="value" string list where such environment elements are separated by a comma. This way you can send any environment variables to the request executed remotely. For a Gliimly server, you can access those variables in a remote request by using "environment" clause of get-sys statement. There is no limit on the number of environment variables you can use this way, other than the underlying communication library.
<timeout> (in "timeout" clause) is the number of seconds after which a service call will timeout; meaning the duration of a service call should not exceed this period of time. For no timeout, specify 0. Note that time needed for a DNS resolution of <location> is not counted in <timeout>. Maximum value is 86400 seconds. Even though it's optional, it is recommended to specify <timeout> in order to avoid a Gliimly process waiting for a very long time. Note that even if your service call times out, the actual request executing on the server may continue until it's done.
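To illustrate the clauses above, a hedged sketch of a call to a service over TCP, sending a request body with a POST (the host, port, paths and the pre-set string variable "payload" are placeholders):
new-remote rsrv location "127.0.0.1:2301" \
app-path "/app" request-path "/import-data" \
request-body content payload content-type "application/json" \
method "POST" \
timeout 10
Copied!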
In this example, 3 service calls are created ("srv1", "srv2" and "srv3"), and they will each make a service request.
Each service request will add a key ("key1" with data "data1", "key2" with data "data2" and "key3" with data "data3").
All three service calls connect via Unix socket.
A full URL path of a service request (for "srv1", for example) would be "/app/manage-keys/op=add/key=key1/data=data1" (note that "app" is the application name and "manage-keys" is the request handler that provides the service).
Copy this to "manage_keys.gliim" source file:
%% /manage-keys public
do-once
new-array h hash-size 1024 process-scope
end-do-once
get-param op
get-param key
get-param data
if-true op equal "add"
write-array h key (key) value data status st
@Added [<<p-out key>>]
else-if op equal "delete"
read-array h key (key) value val \
delete \
status st
if-true st equal GG_ERR_EXIST
@Not found [<<p-out key>>]
else-if
@Deleted [<<p-out val>>]
delete-string val
end-if
else-if op equal "query"
read-array h key (key) value val status st
if-true st equal GG_ERR_EXIST
@Not found, queried [<<p-out key>>]
else-if
@Value [<<p-out val>>]
end-if
end-if
%%
Copied!
Then call-remote will make three service calls to the above request handler in parallel (i.e. as threads executing at the same time). You can examine if everything went okay, how many threads have started, and how many finished with a reply from the service (this means any kind of reply, even if an error). Finally, the output from each call is displayed (that's "data" clause in read-remote statement at the end).
Create file "srv.gliim" and copy to it this code:
begin-handler /srv public
new-remote srv1 local "app" \
url-path "/app/manage-keys/op=add/key=key1/data=data1" \
environment "GG_SILENT_HEADER"="yes"
new-remote srv2 local "app" \
url-path "/app/manage-keys/op=add/key=key2/data=data2" \
environment "GG_SILENT_HEADER"="yes"
new-remote srv3 local "app" \
url-path "/app/manage-keys/op=add/key=key3/data=data3" \
environment "GG_SILENT_HEADER"="yes"
call-remote srv1, srv2, srv3 status st \
started start \
finished-okay fok
if-true st equal GG_OKAY
@No errors from call-remote
end-if
if-true start equal 3
@All three service calls started.
end-if
if-true fok equal 3
@All three service calls finished.
end-if
read-remote srv1 data rdata1
read-remote srv2 data rdata2
read-remote srv3 data rdata3
p-out rdata1
@
p-out rdata2
@
p-out rdata3
@
end-handler
Copied!
Create the application:
sudo mgrg -i -u $(whoami) app
Copied!
Make it:
gg -q
Copied!
Run it:
mgrg -w 1 app
Copied!
Then execute the "/srv" request:
gg -r --req="/srv" --exec --silent-header
Copied!
And the result is (assuming you have included the "manage-keys" hash example above):
No errors from call-remote
All three service calls started.
All three service calls finished.
Added [key1]
Added [key2]
Added [key3]
Copied!
Distributed computing
call-remote
new-remote
read-remote
run-remote
See all
documentation
Number expressions
A number expression uses operators plus (+), minus (-), multiply (*), divide (/) and modulus (%), as well as parenthesis (). For example:
set-number n1 = 10+(4*n2-5)%3
Copied!
You can use number expressions anywhere number is expected as an input to any statement.
Numbers
number-expressions
number-string
set-number
string-number
See all
documentation
Number string
Purpose: Convert number to string.
number-string <number> [ to <string> ] \
[ base <base> ] \
[ status <status> ]
Copied!
<number> is converted to <string> in "to" clause, using <base> in "base" clause, where <base> is by default 10. <base> can be between 2 and 36, inclusive. <number> can be positive or negative (i.e. signed) and can be of any integer type up to 64-bit (char, int, long, long long etc.). If "to" clause is omitted, then <number> is printed out.
Note that any letters in <string> (depending on the <base>) are always lower-case.
If there is an error, such as if <base> is incorrect, then <status> number (in "status" clause) is GG_ERR_FAILED, otherwise it's GG_OKAY.
Use of number-string (and p-num which is based on it) for converting and outputting numbers is high-performance and recommended if your application needs to do that often. If number-string prints out a number (i.e. "to" clause is omitted), and this is within write-string, then <number> is output into the buffer that builds a new string.
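For instance, a sketch of using number-string (with "to" omitted) inside write-string, so the converted number goes into the string being built:
set-number cnt = 42
write-string msg
number-string cnt
end-write-string
Copied!
After this, "msg" contains "42".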
The following will allocate memory for string "res" to be "801":
set-number x = 801
number-string x to res
Copied!
The following will store "-238f" to string "res":
set-number x = -9103
number-string x to res base 16
Copied!
To print out a number -131:
set-number x = -131
number-string x
Copied!
Numbers
number-expressions
number-string
set-number
string-number
See all
documentation
Open file
Purpose: Open file for reading and writing.
open-file <file name> file-id <file id> \
[ new-truncate ] \
[ status <status> ]
Copied!
Opens file given by <file name> for reading and writing and creates an open file variable identified by <file id>.
<file name> can be a full path name, or a path relative to the application home directory (see directories).
You can obtain the status of file opening via <status> number (in "status" clause). The <status> is GG_OKAY if file is opened, or GG_ERR_OPEN if could not open file.
If "new-truncate" clause is used, a new file is created if it doesn't exist, or it is truncated if it does.
Create a file (or truncate an existing one), write 25,000 rows, then read back those rows and display them, and finally close the file:
%% /ofile public
open-file "testwrite" file-id nf new-truncate
start-loop repeat 25000 use i
(( line
@some text in line <<p-num i>>
)) notrim
string-length line to line_len
write-file file-id nf from line length line_len
end-loop
file-position set 0 file-id nf
start-loop repeat 25000 use i
read-file file-id nf to one_item
p-out one_item
end-loop
close-file file-id nf
%%
Copied!
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Out header
Purpose: Output HTTP header.
out-header default
|
out-header use \
[ content-type <content type> ] \
[ download [ <download> ] ] \
[ etag [ <etag> ] ] \
[ file-name <file name> ] \
[ ( cache-control <cache control> ) | no-cache ] \
[ status-id <status id> ] \
[ status-text <status text> ] \
[ custom <header name>=<header value> [ , ... ] ]
Copied!
out-header outputs HTTP header and also sends any cookies produced by set-cookie and delete-cookie. A web page must have an HTTP header output before any other response.
If out-header is not used, a default HTTP header is sent out just before the very first output (see output-statement, p-out etc.) at which point any cookie updates are sent as well; this default header is the same as using "out-header default".
If you use out-header multiple times, all but the very first one are ignored.
If you wish to output a file (such as an image or a PDF document), do not use out-header; rather use send-file instead which outputs its own header.
The HTTP header is sent back to a client who initiated a request. You can specify any custom headers with "use" clause.
Default header
If no out-header is used, or if "default" clause is in place, a default header is constructed, which uses a status of 200/OK and content type of
text/html;charset=utf-8
Copied!
and cache control of
Cache-Control:max-age=0, no-cache; Pragma: no-cache
Copied!
The default header is typical for dynamically generated web pages, and most of the time you would use the default header - meaning you don't need to specify out-header statement.
Headers
The following are subclauses that allow setting any custom header:
- <content type> is content type (such as "text/html" or "image/jpg" etc.) If you are sending a file to a client for download and you don't know its content type, you can use "application/octet-stream" for a generic binary file.
- If "download" is used without boolean variable <download>, or if <download> evaluates to true, then the file is sent to a client for downloading - otherwise the default is to display file in client.
- <file name> is the name of the file being sent to a client. This is not the local file name - it is the file name that client will use for its own purposes.
- <cache control> is the cache control HTTP header. "no-cache" instructs the client not to cache. Only one of "cache-control" and "no-cache" can be used. An example of <cache control>:
send-file "somepic.jpg" headers cache-control "max-age: 3600"
Copied!
- If "etag" is used without boolean variable <etag>, or if <etag> evaluates to true, then "ETAG" header will be generated (a timestamp) and included, otherwise it is not. The time stamp is of last modification date of the file (and typically used to cache a file on client if it hasn't changed on the server). "etag" is useful to let the client know to download the file only once if it hasn't changed, thus saving network and computing resources. ETAG header is used only for send-file.
- <status id> and <status text> are status settings for the response, as strings (such as "425" for "status-id" and "Too early" for "status-text"); see the example after these subclauses.
- To set any type of generic HTTP header, use "custom" subclause, where <header name> and <header value> represent the name and value of a single header. Multiple headers are separated by a comma. There is no limit on the maximum number of such headers, other than that of the underlying HTTP protocol. You must not use "custom" to set headers already set elsewhere (such as "etag" for instance), as that may cause unpredictable behavior. For instance this sets two custom headers:
out-header use custom "CustomOption3"="CustomValue3", "Status"="418 I'm a teapot"
Copied!
"custom" subclause lets you use any custom headers that exist today or may be added in the future, as well as any headers of your own design.
You can use silent-header before out-header in order to suppress its output.
Sometimes you may want to output the default header immediately, for instance if producing the first output may take some time:
out-header default
Copied!
To set a custom header for a web page that changes cache control and adds two new headers:
out-header use content-type "text/html" cache-control "max-age:3600" custom "some_HTTP_option"="value_for_some_HTTP_option", "some_HTTP_option_1"="value_for_some_HTTP_option_1"
Copied!
Web
call-web
out-header
send-file
silent-header
See all
documentation
Output statement
Purpose: Output text.
@<text>
!<verbatim text>
Copied!
Outputting free form text is done by starting the line with "@" or "!". The text is output unencoded with a new line appended.
With "@" statement, any inline-code executes and any output from those statements is output.
With "!" statement, all text is output verbatim, and any inline code is not executed. This is useful when the text printed out should not be checked for any inline-code.
All trailing whitespaces are trimmed from each source code line. If you need to write trailing whitespaces with the "@" statement, you can use p-out as inline-code. The maximum line length is 8KB - this is the source code line length; the actual run-time output length is unlimited.
Note that all characters are output as they are written, including the escape character (\). If you wish to output characters requiring an escape character, such as new line and tab (as is done in C by using \n, \t etc.), use p-out as inline-code.
Outputting "Hello there":
@Hello there
Copied!
You can use other Gliimly statements inlined and mixed with the text you are outputting:
set-string weatherType="sunny"
@Today's weather is <<p-out weatherType>>
Copied!
which would output
Today's weather is sunny
Copied!
With "!" statement, the text is also output, and this example produces the same "Hello there" output as "@":
!Hello there
Copied!
In contrast to "@" statement, "!" statement outputs all texts verbatim and does not execute any inline code:
set-string weatherType="sunny"
!Today's weather is <<p-out weatherType>>
Copied!
which would output
Today's weather is <<p-out weatherType>>
Copied!
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
Pause program
Purpose: Pause request execution.
pause-program <milli seconds>
Copied!
pause-program will delay request execution (i.e. sleep, meaning not utilize computing resources) for a number of <milli seconds>. For instance:
pause-program 1500
Copied!
will pause for 1.5 seconds.
Pause execution for 0.25 seconds:
pause-program 250
Copied!
Time
get-time
pause-program
See all
documentation
Pf out
Purpose: Outputs a formatted string without encoding.
pf-out <format> , <variable> [ , <variable> ]... \
[ to-error ] \
[ to <string> ]
Copied!
pf-out formats a string according to the <format> string and a list of <variable>s and then outputs the result without any encoding (meaning a string is output exactly as it is, and the client may interpret such text in any way it sees fit).
<format> string must be a literal. Variables must follow <format> separated by commas in the same order as placeholders. If you use any placeholders other than specified below, or the type of a variable you use does not match the type of the corresponding placeholder in <format>, your program will error out. You can use the following placeholders in <format> (see trace-run for an example of usage):
- %s for a string
- %<number>s for a string output with a width of at least <number> (any excess filled with spaces to the left),
- %ld for a number
- %<number>ld for a number output with a width of at least <number> (any excess filled with spaces to the left)
<format> string must be present and there must be at least one <variable> (it means if you want to print out a simple string literal you still have to use "%s" as format).
If "to-error" clause is used, the output is sent to "stderr", or standard output stream.
If "to" clause is used, then the output of pf-out is stored into <string>.
To output data (the string output is "the number is 20"):
pf-out "%s is %d", "the number", 20
Copied!
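To illustrate the width placeholders described above (a sketch; the padding consists of spaces added to the left):
pf-out "[%10s] [%10ld]\n", "abc", 42
Copied!
which would output
[       abc] [        42]
Copied!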
Create a query text string by means of write-string statement:
/ Construct the run-time text of dynamic SQL
write-string qry_txt
@select * from <<pf-out "%s where id=%ld", table_name, id_num>>
end-write-string
Copied!
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
Pf url
Purpose: Outputs a URL-encoded formatted string.
pf-url <format> , <variable> [ , <variable> ]... \
[ to-error ] \
[ to <string> ]
Copied!
pf-url is the same as pf-out, except that the output is URL-encoded. This means such output is suited for use in URL parameters.
Create a URL based on arbitrary strings used as URL parameters - for instance space would be encoded as "%20" in the final output:
@<a href='<<p-path>>/update?val=<<pf-url "Purchased %s for %ld dollars", piece_desc, price>>'>Log transaction</a>
Copied!
See pf-out for more examples.
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
Pf web
Purpose: Outputs a web-encoded formatted string.
pf-web <format> , <variable> [ , <variable> ]... \
[ to-error ] \
[ to <string> ]
Copied!
pf-web is the same as pf-out, except that the output is web-encoded (or HTML-encoded). This means such output is suited for use in web pages - meaning any HTML-markup will be properly encoded.
Display text containing HTML tags without them being rendered in the browser:
pf-web "We use %s markup", "<hr/>"
Copied!
See pf-out for more examples.
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
P num
Purpose: Outputs a number.
p-num <number> [ new-line ]
Copied!
p-num outputs a number given by <number> variable.
If "new-line" clause is used, then a new line ("\n") is output after <number>.
To output a number to a client:
set-number x = 100
p-num x
Copied!
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
Position list
Purpose: Set current element in a linked list.
position-list <list> \
[ first | last | end | previous | next ] \
[ status <status> ]
Copied!
position-list changes the current element of linked <list>. A current element is the one that is read with read-list. A newly added element is written with write-list by inserting it just before the current element, thus becoming a new current element. Reading from <list> does not change its current element; use position-list to explicitly change it.
To position to the first element, use "first" clause. Use "last" clause to make the last element the current one. Use "previous" and "next" to change the current element to just before or just after it.
A position just beyond the last element in <list> is considered the "end" of it; in this case write-list will append an element to <list> and this element becomes its last, which is equivalent to using write-list statement with "append" clause.
Use "end" clause to set current element to <list>'s end. Note that "end" clause is equivalent to using "next" clause on the the last element in <list>.
If you attempt to position prior to the first element, after the end of <list>, or anywhere in an empty list, then <status> number (in "status" clause") is GG_ERR_EXIST, otherwise it is GG_OKAY. Note that if <status> is GG_ERR_EXIST, the current element will not change.
Position to the next element in list:
position-list mylist next status st
if-true st equal GG_ERR_EXIST
@Beyond the end of list
end-if
Copied!
Position to the first element in list:
position-list mylist first
Copied!
Linked list
delete-list
get-list
new-list
position-list
purge-list
read-list
write-list
See all
documentation
Postgresql database
Postgres database configuration file has a Postgres connection string.
You can see the parameters available at https://www.postgresql.org/docs/14/libpq-connect.html#LIBPQ-CONNSTRING.
Most of the time, though, you may be using only a few of those options, as in:
user=myuser password=mypwd dbname=mydb
Copied!
The above file has parameters "user" (the Postgres user), "password" (the password for the Postgres user), and "dbname" (the Postgres database name). If you use peer-authenticated (i.e. passwordless) login, then omit "password" - this is when the Postgres user name is the same as your Operating System user name and a local Unix domain socket is used for authentication.
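For example, a peer-authenticated (passwordless) configuration file could contain just (a sketch; "myuser" and "mydb" are placeholders):
user=myuser dbname=mydb
Copied!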
Database
begin-transaction
commit-transaction
current-row
database-config-file
db-error
mariadb-database
postgresql-database
rollback-transaction
run-query
sqlite-database
See all
documentation
P out
Purpose: Outputs a string without encoding.
p-out <string> [ length <length> ] [ new-line ]
Copied!
p-out outputs a string expression given by <string>, without any encoding (meaning a string is output exactly as it appears).
If "length" clause is used, then only <length> leading bytes of <string> are output.
If "new-line" clause is used, then a new line ("\n") is output after <string>.
Note that all bytes of <string> are output, even if <string> contains null-bytes.
To output data verbatim to a client:
set-string mydata="Hello world"
p-out mydata
Copied!
Writing to a client, outputting text followed by a horizontal rule - the text is output to the client (such as a browser) as it is, and the browser will interpret the tags "<br/>" and "<hr/>" as a line break and a horizontal line and display them as such:
p-out "This is a non-encoded output<br/>" new-line
p-out "<hr/>"
Copied!
Create a query text string by means of write-string statement:
write-string qry_txt
@select * from <<p-out table_name>>
end-write-string
Copied!
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
P path
Purpose: Outputs URL application path.
p-path [ new-line ]
Copied!
p-path outputs a URL application path (see request), i.e. the leading path segment(s) prior to request name.
If no "--path" option in gg is used to specify URL application path, then it is the same as application name prepended with a forward slash:
/<app name>
Copied!
p-path provides the leading part of a URL path after which request name and its parameters can be specified. It is used in HTML forms and URLs (either for HTML or API) to refer back to the same application.
Use p-path to create the absolute URL path to refer back to your service so you can issue requests to it.
For example, this is a link that specifies request to service "show-notes":
@<a href="<<p-path>>/show-notes?date=yesterday">Show Notes</a>
Copied!
If you are building HTML forms, you can add a note with:
@<form action="<<p-path>>/add-note" method="POST">
@<input type="text" name="note" value="">
@</form>
Copied!
See request for more on URL structure.
If "new-line" clause is used, then a new line ("\n") is output after the path.
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
P source file
Purpose: Outputs the file name of the current source file.
p-source-file [ new-line ]
Copied!
p-source-file outputs the file name (relative to the source code directory) of the source file where the statement is located; this is often used for debugging.
If "new-line" clause is used, then a new line ("\n") is output afterwards.
@This file is <<p-source-file>>
Copied!
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
P source line
Purpose: Outputs current line number in the source file.
p-source-line [ new-line ]
Copied!
p-source-line outputs the line number in the source file where the statement is located. It is often used for debugging purposes.
If "new-line" clause is used, then a new line ("\n") is output afterwards.
@This line is #<<p-source-line>>
Copied!
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
Purge array
Purpose: Purge an array.
purge-array <array>
Copied!
purge-array deletes all elements from <array>, which was created with new-array.
After purge-array, you can still use <array> without calling new-array again. Note however, that the "average-reads" statistic (see get-array) is not reset - it keeps being computed and remains for the life of the array.
Create array, put some data in it and then delete the data:
new-array h
write-array h key "mykey" value "myvalue"
purge-array h
Copied!
See read-array for more examples.
Array
get-array
new-array
purge-array
read-array
resize-array
write-array
See all
documentation
Purge fifo
Purpose: Delete FIFO list data.
purge-fifo <list>
Copied!
purge-fifo will delete all elements from the FIFO <list>, created by new-fifo. The list is then empty and you can still put data into it, and get data from it afterwards, without having to call new-fifo again.
All keys or values stored in the list are also deleted.
See read-fifo.
FIFO
delete-fifo
new-fifo
purge-fifo
read-fifo
rewind-fifo
write-fifo
See all
documentation
Purge index
Purpose: Delete all index nodes.
purge-index <index>
Copied!
purge-index will delete all <index> nodes; <index> must have been created with new-index. All of <index>'s nodes, along with their keys and values, are deleted.
After purge-index, the index is empty and you can use it again (write into it, read from it etc.).
Delete all index data:
new-index myindex
...
purge-index myindex
Copied!
Index
delete-index
get-index
new-index
purge-index
read-index
use-cursor
write-index
See all
documentation
Purge lifo
Purpose: Delete LIFO list data.
purge-lifo <list>
Copied!
purge-lifo will delete all elements from the LIFO <list> created by new-lifo, including all keys and values. The list is then empty and you can still put data into it, and get data from it afterwards, without having to call new-lifo again.
See read-lifo.
LIFO
delete-lifo
new-lifo
purge-lifo
read-lifo
rewind-lifo
write-lifo
See all
documentation
Purge list
Purpose: Delete linked list data.
purge-list <list>
Copied!
purge-list will delete all elements (including their keys and values) from the linked <list>, created by new-list. The list is then empty and you can still put data into it, and get data from it afterwards, without having to call new-list again.
See read-list.
Linked list
delete-list
get-list
new-list
position-list
purge-list
read-list
write-list
See all
documentation
P url
Purpose: Outputs a URL-encoded string.
p-url <string> [ length <length> ] [ new-line ]
Copied!
p-url is the same as p-out, except that the output is URL-encoded. This means such output is suited for use in URL parameters.
If "length" clause is used, then only <length> leading bytes of <string> are URL-encoded and then output.
If "new-line" clause is used, then a new line ("\n") is output after encoded <string>.
Create a URL based on arbitrary strings used as URL parameters - for instance a space would be encoded as "%20" in the final output:
@<a href='<<p-path>>/update?item=<<p-url item_name>>'>Update</a>
Copied!
See p-out for more examples.
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
P web
Purpose: Outputs a web-encoded string.
p-web <string> [ length <length> ] [ new-line ]
Copied!
p-web is the same as p-out, except that the output is web-encoded (or HTML-encoded). This means such output is suited for use in web pages - the text will be displayed verbatim without HTML-markup being interpreted.
If "length" clause is used, then only <length> leading bytes of <string> are web-encoded and then output.
If "new-line" clause is used, then a new line ("\n") is output after encoded <string>.
Display "We use <hr/> markup" text, without "<hr/>" being displayed as a horizontal line:
p-web "We use <hr/> markup"
Copied!
See p-out for more examples.
Output
finish-output
flush-output
output-statement
pf-out
pf-url
pf-web
p-num
p-out
p-path
p-source-file
p-source-line
p-url
p-web
See all
documentation
Random crypto
Purpose: Obtain a random string for cryptographic use.
random-crypto to <random string> \
[ length <string length> ]
Copied!
random-crypto obtains a random string of length <string length>. This statement uses a cryptographically secure pseudo random generator (CSPRNG) from OpenSSL library. If "length" clause is omitted, the length is 20 by default.
The value generated is binary: it may contain null-characters, and it is null-terminated.
Use this statement only when needed for specific cryptographic uses. In all other cases, use random-string which is considerably faster.
Get a 20-byte long random binary value:
random-crypto to str length 20
Copied!
Encryption
decrypt-data
derive-key
encrypt-data
hash-string
hmac-string
random-crypto
random-string
See all
documentation
Random string
Purpose: Obtain a random string.
random-string to <random string> \
[ length <string length> ] \
[ number | binary ]
Copied!
random-string obtains a random string of length <string length>. If "length" clause is omitted, the length is 20 by default.
If "number" clause is used, then the resulting string is composed of digits only ("0" through "9").
If "binary" clause is used, then the resulting string is binary, i.e. each byte can have an unsigned value of 0-255.
By default, if neither "number" nor "binary" is used, the resulting string is alphanumeric, i.e. only digits ("0" through "9") and letters ("a"-"z" and "A"-"Z") are used.
The random generator is based on the Linux random() generator, seeded by local process properties such as its PID and time. A single process is seeded once, and thus any number of requests served by the same process will use a subset of the process' random sequence. Due to joint entropy, each result given to any request is random, not just within a single request, but among any number of different requests.
Get a 100-digit long random value (as an alphanumeric string):
random-string to str length 100
pf-out "%s\n", str
Copied!
Get a random number of length 10 in string representation:
random-string to str length 10 number
pf-out "%s\n", str
Copied!
Get a random binary value that is 8 bytes in length - this value may contain null bytes (i.e. it will contain bytes with values ranging from 0 to 255):
random-string to str length 8 binary
Copied!
Encryption
decrypt-data
derive-key
encrypt-data
hash-string
hmac-string
random-crypto
random-string
See all
documentation
Read array
Purpose: Get data from array.
read-array <array> \
key <key> \
value <value> \
[ delete [ <delete> ] ] \
[ status <status> ]
read-array <array> traverse begin
read-array <array> traverse \
key <key> \
value <value> \
[ delete [ <delete> ] ] \
[ status <status> ]
Copied!
Without "traverse" clause
read-array will obtain an element from <array>, which is a string <value> (in "value" clause) based on a string <key> (in "key" clause). <array> was created by new-array.
You can also delete an element from the array by using "delete" clause - the <value> is still obtained though it is no longer in the array table. The array element is deleted if "delete" clause is used without boolean variable <delete>, or if <delete> evaluates to true.
If no <key> was found in the array table, <status> number (in "status" clause) is GG_ERR_EXIST and <value> is unchanged, otherwise <status> is GG_OKAY.
read-array with "traverse" clause obtains <key> and <value> of the current element, and then positions to the next one. You can also delete this element from the array by using "delete" clause - the <key> and <value> are still obtained though the element is no longer in the array table. The array element is deleted if "delete" clause is used without boolean variable <delete>, or if <delete> evaluates to true.
Use "begin" clause to position at the very first element. This is useful if you wish to get all the key/value pairs from a array table - note they are not extracted in any particular order. When there are no more elements, <key> and <value> are unchanged and <status> number (in "status" clause) is GG_ERR_EXIST, otherwise <status> is GG_OKAY.
You may search, add or delete elements while traversing a array table, and this will be reflected in all elements not yet traversed.
In this example, new array is created, a key/value pair is written to it, and then the value is obtained and the element deleted; return status is checked:
new-array h
write-array h key "X0029" value "some data"
read-array h key "X0029" value res status f delete
if-true f equal GG_ERR_EXIST
@No data in array!
else-if
@Deleted value is <<p-out res>>
end-if
Copied!
The following will traverse the entire array and display all the data:
read-array h traverse begin
start-loop
read-array h traverse key k value r status f
if-true f equal GG_ERR_EXIST
break
end-if
pf-out "Key [%s] data [%s]\n", k, r
end-loop
Copied!
Array
get-array
new-array
purge-array
read-array
resize-array
write-array
See all
documentation
Read fifo
Purpose: Reads key/value pair from a FIFO list.
read-fifo <list> \
key <key> \
value <value> \
[ status <status> ]
Copied!
read-fifo retrieves an element from the FIFO <list> into <key> string (in "key" clause) and <value> string (in "value" clause).
Once an element has been retrieved, the next use of read-fifo will obtain the following one, in the same order they were put in. read-fifo starts with the first element put in, and moves forward from there, unless rewind-fifo is called, which positions back to the first one.
If the element is successfully retrieved, <status> number (in "status" clause) is GG_OKAY, otherwise it is GG_ERR_EXIST, which means there are no more elements to retrieve.
In this example, a FIFO list is created, and two key/value pairs added. They are then retrieved in a loop and printed out (twice with rewind), and then the list is purged:
new-fifo mylist
write-fifo mylist key "key1" value "value1"
write-fifo mylist key "some2" value "other2"
start-loop
read-fifo mylist key k value v status st
if-true st not-equal GG_OKAY
break
end-if
@Obtained key <<p-out k>> with value <<p-out v>>
end-loop
rewind-fifo mylist
start-loop
read-fifo mylist key k value v status st
if-true st not-equal GG_OKAY
break
end-if
@Again obtained key <<p-out k>> with value <<p-out v>>
end-loop
purge-fifo mylist
Copied!
FIFO
delete-fifo
new-fifo
purge-fifo
read-fifo
rewind-fifo
write-fifo
See all
documentation
Read file
Purpose: Read file into a string variable.
read-file <file> | ( file-id <file id> ) \
to <content> \
[ position <position> ] \
[ length <length> ] \
[ status <status> ]
Copied!
Without "file-id" clause
This is a simple method of reading a file. File named <file> is opened, data read, and file is closed.
<file> can be a full path name, or a path relative to the application home directory (see directories).
Data read is stored into string <content>. Note that file can be binary or text and <content> may have null-bytes.
If "position" and "length" clauses are not specified, read-file reads the entire <file> into <content>.
If "position" clause is used, then reading starts at byte <position>, otherwise it starts at the beginning of the file. Position of zero (0) represents the beginning of the file.
If "length" clause is used, then <length> number of bytes is read, otherwise the rest of the file is read. If <length> is 0, <content> is empty string and <status> is 0.
If "status" clause is used, then the number of bytes read is stored to <status>, unless error occurred, in which case <status> is negative and has the error code. The error code can be GG_ERR_POSITION (if <position> is negative, outside the file, or file does not support it), GG_ERR_READ (if <length> is negative or there is an error reading file) or GG_ERR_OPEN if file is not open.
With "file-id" clause
This method uses a <file id> that was created with open-file. You can then read (and write) the file using this <file id>, and the file stays open until close-file is called or the request ends (i.e. Gliimly will automatically close any such open files).
Data read is stored into string <content>. Note that file can be binary or text and <content> may have null-bytes.
If "position" clause is used, then data is read starting from byte <position> (with position of 0 being the first byte), otherwise reading starts from the current file position determined by the previous reads/writes or as set by using "set" clause in file-position. Note that after each read or write, the file position is advanced by the number of bytes read or written.
If "length" clause is used, then <length> number of bytes is read, otherwise the rest of the file is read. If <length> is 0, <content> is empty string and <status> is 0.
Note that when you reach the end of file and no more bytes can be read, <status> is 0.
If "status" clause is used, then the number of bytes read is stored to <status>, unless error occurred, in which case <status> has the error code. The error code can be GG_ERR_POSITION (if <position> is negative, outside the file, or file does not support it), GG_ERR_READ (if <length> is negative or there is an error reading file) or GG_ERR_OPEN if file is not open.
To read the entire file and create both the variable that holds its content and the status variable:
read-file "/home/user/some_file" to file_content status st
if-true st greater-than 0
@Read:
@<hr/>
p-web file_content
@<hr/>
else-if
@Could not read (<<pf-out "%ld", st>>)
end-if
Copied!
To read 10 bytes starting at position 20 (with position 0 being the first byte):
read-file "/home/user/some_file" to file_content position 20 length 10
Copied!
See open-file for an example with "file-id" clause.
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Read index
Purpose: Search/update an index.
read-index <index> \
( equal <search key> | lesser <search key> | greater <search key> | \
lesser-equal <search key> | greater-equal <search key> | \
min-key | max-key ) \
[ value <value> ] \
[ update-value <update value> ] \
[ key <key> ] \
[ status <status> ] \
[ new-cursor <cursor> ]
Copied!
read-index will search <index> (created with new-index) for a node with the string key that is:
- equal to <search key> ("equal" clause)
- lesser than <search key> ("lesser" clause)
- greater than <search key> ("greater" clause)
- lesser than or equal to <search key> ("lesser-equal" clause)
- greater than or equal to <search key> ("greater-equal" clause)
- a minimum key in the index ("min-key" clause)
- a maximum key in the index ("max-key" clause)
The <status> in "status" clause will be GG_OKAY if a key conforming to one of these criteria is found, and GG_ERR_EXIST if not.
If a key is found, the value associated with the key can be obtained with "value" clause in <value>; an existing key used to originally insert this value into the index can be obtained with "key" clause in string <key>. If a key is not found, both <value> and <key> are unchanged.
You can update the value associated with a found key with "update-value" clause by specifying <update value> string. This update is performed after <value> has been retrieved, allowing you to obtain the previous value in the same statement.
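For example, the following (a minimal sketch; the index, key and values are hypothetical) replaces the value stored under key "user1" while capturing the old value:
read-index myindex equal "user1" value old_val update-value "new data" status st
Copied!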
If you'd like to iterate the ordered list of keys in an index, create a <cursor> by using "new-cursor" clause, in which case <cursor> will be positioned on a found index node. See use-cursor for more on using cursors. Cursors are useful in range searches; typically you'd find a key that is an upper or lower bound of a range and then keep iterating to a lesser or greater value until some criteria is met, such as when the opposite bound is found. Gliimly indexes are by default constructed so that such iterations are O(1) in complexity, meaning each is a single index node access (see new-index).
In this example, a million key/value pairs are inserted into an index, and then each of them is searched for and then displayed back (see write-index for more on inserting into an index). Both the key and the value are the string representation of a number:
%% /index-example public
new-index myindex key-as "positive integer"
set-number i
start-loop use i start-with 0 repeat 1000000
number-string i to key
set-string data=key
write-index myindex key (key) value data
end-loop
start-loop use i start-with 0 repeat 1000000
number-string i to key
read-index myindex equal (key) status st value data
if-true st not-equal GG_OKAY
@Could not find key <<p-out key>>
else-if
@Found data <<p-out data>> associated with key <<p-out key>>
end-if
delete-string key
end-loop
%%
Copied!
Index
delete-index
get-index
new-index
purge-index
read-index
use-cursor
write-index
See all
documentation
Read json
Purpose: Read data elements of JSON document.
read-json <json> \
[ key <key> ] \
[ value <value> ] \
[ type <type> ] \
[ next ]
Copied!
read-json reads data elements from <json> variable, which is created with json-doc. A data element is a string <key>/<value> pair of a leaf node, where key (in "key" clause) is a normalized key name, which is the value's name preceded with the names of all objects and array members leading up to it, separated by a dot (".").
The actual <value> is obtained with "value" clause, and the <type> of value can be obtained with "type" clause.
<type> is a number that can be GG_JSON_TYPE_STRING, GG_JSON_TYPE_NUMBER, GG_JSON_TYPE_REAL, GG_JSON_TYPE_BOOL and GG_JSON_TYPE_NULL for string, number, real (floating point), boolean and null values respectively. Note that <value> is always a string representation of these types.
Use "next" clause to move to the next sequential key/value pair in the document, from top down. Typically, you would get a key first, examine if it's of interest to you, and then obtain value. This is because Gliimly uses "lazy" approach where value is not copied until needed; with this approach JSON parsing is faster.
If there are no more data elements to read, <type> is GG_JSON_TYPE_NONE.
<key> in "key" clause is a normalized name of any given leaf node in JSON text. This means every non-leaf node is included (such as arrays and objects), separated by a dot ("."), and arrays are indexed with "[]". An example would be:
"menu"."popup"."menuitem"[1]."onclick"
Copied!
See json-doc.
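As an illustrative sketch (assuming <jdoc> was already created with json-doc, not shown here), all leaf key/value pairs could be printed like this:
start-loop
read-json jdoc key k value v type t
if-true t equal GG_JSON_TYPE_NONE
break-loop
end-if
@<<p-out k>> is <<p-out v>>
read-json jdoc next
end-loop
Copied!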
JSON parsing
json-doc
read-json
See all
documentation
Read lifo
Purpose: Reads key/value pair from a LIFO list.
read-lifo <list> \
key <key> \
value <value> \
[ status <status> ]
Copied!
read-lifo retrieves an element from the LIFO <list> into <key> string (in "key" clause) and <value> string (in "value" clause).
Once an element has been retrieved, the next use of read-lifo will obtain the following one, in the reverse order they were put in. read-lifo starts with the last element put in, and moves backwards from there, unless rewind-lifo is called, which positions back to the last one. Note that write-lifo will cause the next read-lifo to start with the element just written, i.e. it implicitly calls rewind-lifo.
If the element is successfully retrieved, <status> number (in "status" clause) is GG_OKAY, otherwise it is GG_ERR_EXIST, which means there are no more elements to retrieve.
In this example, a LIFO list is created, and two key/value pairs added. They are then retrieved in a loop and printed out (twice with rewind), and then the list is purged:
%% /lifo public
new-lifo mylist
write-lifo mylist key "key1" value "value1"
write-lifo mylist key "some2" value "other2"
start-loop
read-lifo mylist key k value v status st
if-true st not-equal GG_OKAY
break-loop
end-if
@Obtained key <<p-out k>> with value <<p-out v>>
end-loop
rewind-lifo mylist
start-loop
read-lifo mylist key k value v status st
if-true st not-equal GG_OKAY
break-loop
end-if
@Again obtained key <<p-out k>> with value <<p-out v>>
end-loop
purge-lifo mylist
rewind-lifo mylist
read-lifo mylist key k value v status st
if-true st not-equal GG_OKAY
@LIFO is empty
end-if
%%
Copied!
LIFO
delete-lifo
new-lifo
purge-lifo
read-lifo
rewind-lifo
write-lifo
See all
documentation
Read line
Purpose: Read text file line by line in a loop.
read-line <file> to <line content> [ status <status> ] [ delimiter <delimiter> ]
<any code>
end-read-line
Copied!
read-line starts the loop in which a text <file> is read line by line into string <line content>, with end-read-line ending this loop. Once the end of <file> has been reached, or an error occurs, the loop exits.
<file> can be a full path name, or a path relative to the application home directory (see directories).
<status> number will be GG_ERR_READ if there is an error in reading the file, or GG_ERR_OPEN if the file cannot be opened, or GG_OKAY if end-of-file has been reached. Check for errors after the end-read-line statement. If a line was read successfully, then <status> is its length.
<line content> is allocated when a line is read and freed just before the next line is read or if there are no more lines to read. If you want to use <line content> outside of this scope, save it or stash it somewhere first.
String <delimiter> separates the lines in the file, and is a new line by default; however, it can be any character (note that only the first character of string <delimiter> is used).
A new line (or a <delimiter>) remains in <line content> if it was present in the file (note that the very last line may not have it).
Use break-loop and continue-loop statements to exit and continue the loop.
To read a text file line by line, and display as a web page with line breaks:
read-line "/home/bear/tmp/ll/filexx" to one_line status st
string-length one_line to line_len
@Line length is <<p-num line_len>>, line is <<p-web one_line>> status <<p-num st>><br/>
end-read-line
if-true st lesser-than 0
get-req error to err
@Error in reading, error [<<p-out err>>]
end-if
Copied!
To read a text file delimited by "|" character:
read-line "/home/user/dir/file" to one_line status len delimiter '|'
...
Copied!
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Read list
Purpose: Read/update key/value pair from a linked list.
read-list <list> \
key <key> \
value <value> \
[ update-value <update value> ] [ update-key <update key> ] \
[ status <status> ]
Copied!
read-list retrieves an element from the linked <list>, storing it into <key> string (in "key" clause) and <value> string (in "value" clause). After each read-list, the list's current element remains at the element read; use position-list to move it (for instance to the next one).
If an element could not be retrieved, <status> number (in "status" clause) will be GG_ERR_EXIST and <key> and <value> will be unchanged (this can happen if current list element is beyond the last element, such as for instance if "end" clause is used in position-list statement), otherwise <status> is GG_OKAY.
Initially when the list is created with new-list, read-list starts with the first element in the list. Use position-list to change the default list's current element.
You can update the element's value with "update-value" clause by specifying <update value> string. This update is performed after a <value> has been retrieved, allowing you to obtain the previous value in the same statement.
You can update the element's key with "update-key" clause by specifying <update key> string. This update is performed after a <key> has been retrieved, allowing you to obtain the previous key in the same statement.
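For instance, to read the current element and replace its value in the same statement (a minimal sketch, assuming "mylist" exists):
read-list mylist key k value v update-value "new value" status st
Copied!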
In this example, a linked list is created, and three key/value pairs added. They are then retrieved from the last towards the first element, and then again in the opposite direction:
new-list mylist
write-list mylist key "key1" value "value1"
write-list mylist key "key2" value "value2"
write-list mylist key "key3" value "value3"
position-list mylist last
start-loop
read-list mylist key k value v
@Obtained key <<p-out k>> with value <<p-out v>>
position-list mylist previous status s
if-true s equal GG_ERR_EXIST
break-loop
end-if
end-loop
start-loop
read-list mylist key k value v status s
if-true s equal GG_ERR_EXIST
break-loop
end-if
@Again obtained key <<p-out k>> with value <<p-out v>>
position-list mylist next
end-loop
purge-list mylist
Copied!
Linked list
delete-list
get-list
new-list
position-list
purge-list
read-list
write-list
See all
documentation
Read message
Purpose: Read key/value from message.
read-message <message> \
key <key> \
value <value> \
[ status <status> ]
Copied!
read-message reads strings <key> (in "key" clause) and <value> (in "value" clause) from <message>, which must have been created with new-message.
The reading of key/value pairs starts from the beginning of message and proceeds sequentially forward. Once a key/value pair is read it cannot be read again.
<status> number (in "status" clause) will be GG_OKAY for a successful read, GG_ERR_FORMAT if message is not in SEMI format or GG_ERR_LENGTH if message isn't of proper length.
Once a message is read from, it cannot be written to (see write-message).
See new-message.
Messages
get-message
new-message
read-message
SEMI
write-message
See all
documentation
Read remote
Purpose: Get results of a service call.
read-remote <service> \
[ data <data> ] \
[ error <error> ] \
[ status <status> ] \
[ status-text <status text> ] \
[ handler-status <service status> ]
Copied!
Use read-remote to get the results of call-remote created in new-remote; the same <service> must be used in all.
- Getting the reply from server
The service reply is split in two. One part is the actual result of processing (called "stdout" or standard output), and that is "data". The other is the error messages (called "stderr" or standard error), and that's "error". The standard output goes to "data", except for output from report-error and from pf-out/pf-url/pf-web with the "to-error" clause, which goes to "error". Note that "data" and "error" streams can be co-mingled when output by the service, but they will be obtained separately. This allows for a clean separation of output from any error messages.
<data> is the "data" reply of a service call (in "data" clause). <error> is the "error" reply (in "error" clause).
- Getting status of a service call
The status of a service call (as a number) can be obtained in <status> (in "status" clause). This is the protocol status, and it may be:
- GG_OKAY if request succeeded,
- GG_CLI_ERR_RESOLVE_ADDR if host name for TCP connection cannot be resolved,
- GG_CLI_ERR_PATH_TOO_LONG if path name of Unix socket is too long,
- GG_CLI_ERR_SOCKET if cannot create a socket (for instance they are exhausted for the process or system),
- GG_CLI_ERR_CONNECT if cannot connect to server (TCP or Unix alike),
- GG_CLI_ERR_SOCK_WRITE if cannot write data to server (for instance if server has encountered an error or is down, or if network connection is no longer available),
- GG_CLI_ERR_SOCK_READ if cannot read data from server (for instance if server has encountered an error or is down, or if network connection is no longer available),
- GG_CLI_ERR_PROT_ERR if there is a protocol error, which indicates a protocol issue on either or both sides,
- GG_CLI_ERR_BAD_VER if either side does not support protocol used by the other,
- GG_CLI_ERR_SRV if server cannot complete the request,
- GG_CLI_ERR_UNK if server does not recognize record types used by the client,
- GG_CLI_ERR_OUT_MEM if client is out of memory,
- GG_CLI_ERR_ENV_TOO_LONG if the combined length of all environment variables is too long,
- GG_CLI_ERR_ENV_ODD if the number of supplied environment name/value pairs is incorrect,
- GG_CLI_ERR_BAD_TIMEOUT if the value for timeout is incorrect,
- GG_CLI_ERR_TIMEOUT if the request timed out based on "timeout" parameter or otherwise if the underlying Operating System libraries declared their own timeout.
You can also obtain the status text in <status text> (in "status-text" clause); this is a human readable status message which is an empty string (i.e. "") if there is no error (meaning if <status> is GG_OKAY).
- Getting service status
<service status> (in "handler-status" clause) is the return status of the code executing a remote service handler; it is conceptually similar to a return value from a function (as a number). The particular service handler you are calling may or may not return this status; if it does, its return status can be sent back via handler-status and/or the exit-handler statement.
You must specify at least one clause to obtain in read-remote; you can specify any number of them.
See examples in new-remote and call-remote.
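As a minimal sketch (assuming <srv> was set up with new-remote and invoked with call-remote, neither shown here), the results might be read like this:
read-remote srv data rdata error edata status st handler-status hstatus
if-true st equal GG_OKAY
@Service replied: <<p-out rdata>>
else-if
@Call failed, error output: <<p-out edata>>
end-if
Copied!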
Distributed computing
call-remote
new-remote
read-remote
run-remote
See all
documentation
Read split
Purpose: Obtain split string pieces.
read-split <piece number> from <split string> to <piece> [ status <status> ]
Copied!
read-split will read split string pieces from <split string> which is produced by split-string. <piece number> is the number (starting with 1) of the piece to retrieve in string <piece>. <status> number (in "status" clause) is GG_OKAY if successful, or GG_ERR_OVERFLOW if <piece number> is not valid (meaning it's outside of the range of pieces parsed by split-string).
See split-string.
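A minimal sketch, assuming <parts> was produced by split-string (not shown here) and has at least two pieces:
read-split 2 from parts to second_piece status st
if-true st not-equal GG_OKAY
@No such piece
else-if
@Second piece is <<p-out second_piece>>
end-if
Copied!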
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Rename file
Purpose: Renames a file.
rename-file <from file> to <to file> [ status <status> ]
Copied!
rename-file will rename <from file> to <to file>. <status> number is GG_OKAY on success and GG_ERR_RENAME on failure.
<from file> and <to file> must be specified with full paths unless they are in the current working directory (see directories), in which case a name alone will suffice. <from file> and <to file> can be in different directories.
Rename files:
rename-file "/home/u1/d1/f1" to "/home/u1/d2/f2" status st
if-true st equal GG_OKAY
@Rename successful. <br/>
end-if
Copied!
Rename files in the current working directory:
rename-file "f1" to "f2" status st
if-true st equal GG_OKAY
@Rename successful. <br/>
end-if
Copied!
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Replace string
Purpose: Replaces part of string.
replace-string <string> \
( copy <replacement> ) | ( copy-end <replacement> ) \
[ start-with <start with> ] \
[ length <length> ]
Copied!
replace-string will replace part of <string> with <replacement> string. "copy" clause will make a replacement in the leading part of <string>, while "copy-end" will make a replacement in the trailing part of <string>.
If "length" clause is not used, then the entire <replacement> string is used, otherwise only the <length> leading bytes of it.
If "start-with" clause is used, then <replacement> will be copied starting with byte <start with> in <string> ("0" being the first byte) (with "copy" clause) or starting with <start with> bytes prior to the end of <string> (with "copy-end" clause).
If "start-with" clause is not used, then <replacement> will replace the leading part of <string> (with "copy" clause") or the very last part of <string> (with "copy-end" clause). In either case, the number of bytes copied is determined by whether "length" clause is used or not.
If either "start-with" or "length" is negative, it's the same as if not specified.
After replace-string below, string "a" will be "none string is here":
set-string b="none"
set-string a="some string is here"
replace-string a copy b
Copied!
After replace-string below, string "a" will be "some string is none":
set-string b="none"
set-string a="some string is here"
replace-string a copy-end b
Copied!
In this example, "a" will be "somnontring is here":
set-string b="none"
set-string a="some string is here"
replace-string a copy b start-with 3 length 3
Copied!
In the following example, "a" will be "some string inohere":
set-string b="none"
set-string a="some string is here"
replace-string a copy-end b start-with 6 length 2
Copied!
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Report error
Purpose: Reports a fatal error.
report-error <format>, <variable> [ , ... ]
Copied!
report-error will report a fatal error. It will format an error message according to the <format> string and a list of <variable>s and then write it in the trace file (see directories); this happens regardless of whether tracing is enabled or not.
See error-handling when report-error is called.
<format> string must be present and there must be at least one <variable> (meaning if you want to report a simple string literal you still have to use "%s" as the format). The reason for this is to avoid formatting errors, and to use formatting in a consistent fashion.
<format> string must be a literal. Variables must follow <format> separated by commas in the same order as placeholders. If you use any placeholders other than those specified below, or the type of a variable does not match the type of the corresponding placeholder in <format>, your program will error out. You can use the following placeholders in <format> (see trace-run for an example of usage):
- %s for a string
- %<number>s for a string output with a width of at least <number> (any excess filled with spaces to the left),
- %ld for a number
- %<number>ld for a number output with a width of at least <number> (any excess filled with spaces to the left)
report-error "Too many input parameters for %s, encountered total of [%ld]", "customer", num_count
Copied!
Error handling
db-error
error-code
error-handling
report-error
See all
documentation
Request body
Purpose: Get the body of an HTTP request.
request-body <request body>
Copied!
request-body stores the request body of an HTTP request into string <request body>.
If the content type of the request is "multipart/form-data", the request body is empty because all the data (including any attached files) can be obtained by using get-param (see file-uploading for files). In all other cases, request body is available.
Typical use of request-body is when some text or binary information is attached to the request, such as JSON for example, though it can be anything else, for example an image, some text, or a PDF document. Usually request body is present for POST, PUT or PATCH requests, but you can also obtain it for GET or DELETE requests, if supplied (for instance identifying a resource may require more information than can fit in a query string), or for any custom request method.
String variable "reqb" will hold request body of a request:
request-body reqb
Copied!
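As a slightly larger sketch (the handler name and payload are hypothetical), a handler could accept a text payload and report its size:
begin-handler /receive-note public
request-body note
string-length note to note_len
@Received <<p-num note_len>> bytes
end-handler
Copied!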
Request data
get-param
request-body
set-param
See all
documentation
Request
Gliimly applications run by processing requests. A request always takes the form of an HTTP request, meaning a URL, an optional HTTP request body, and any environment variables. This is regardless of whether it's a service or a command-line program.
A "request URL" is a URL that an outside caller (such as a web browser) uses to execute your Gliimly code. Aside from the scheme, domain and port, it's made up of:
- application path,
- request path, and
- URL parameters.
Here's a breakdown of URL structure:
<scheme>://<domain>[:<port>]<application path><request path><parameters>
Copied!
For example, in the following URL:
https://your.website.com/my-app/my-request/par1=val1/par2=val2
Copied!
"/my-app" is application path, "/my-request" is request path and "/par1=val1/par2=val2" are parameters "par1" and "par2" with values "val1" and "val2". Together, application path and request path are called URL path.
The leading part of URL's path is called "application path". By default, application path is the application name (see mgrg with "-i" option) preceded by forward slash ("/"); if it's "shopping", then the default application path is:
/shopping
Copied!
Application name can contain alphanumerical characters and hyphens.
- Customizing application path
You can change the application path by specifying it with "--path" parameter in gg when building; each application must have its own unique path. Note that whatever it may be, the application name must always be its last path segment. For example, if your application name is "shopping", then the application path may be:
/api/v2/shopping
Copied!
An example of specifying the custom application path:
gg -q --path="/api/v2/shopping"
Copied!
Request path follows the application path, for instance:
https://some.web.site/shopping/buy-item
Copied!
In this case the application path is "/shopping" and the request path is "/buy-item". It means that file "buy-item.gliim" handles request "/buy-item" by implementing a begin-handler "/buy-item" in it. As another example, file "services/manage-home.gliim" (meaning "manage-home.gliim" file in subdirectory "services") handles request "/services/manage-home" etc.
The request path must match (fully or partially) the path of the file name that implements it, with source directory being the root ("/"). Here is an example of implementing a request "/buy-item" in file "buy-item.gliim":
begin-handler /buy-item public
get-param some_param
@Bought item: <<p-out some_param>>
end-handler
Copied!
As an example of a path hierarchy, such as for example a hierarchy of resources, methods etc, begin-handler may be:
begin-handler /items/wine-collection/red-wine/buy-item public
...
end-handler
Copied!
then the URL to call it would be:
https://some.web.site/shopping/items/wine-collection/red-wine/buy-item
Copied!
and might be implemented in file "items/wine-collection/red-wine/buy-item.gliim", meaning under subdirectory "items", then subdirectory "wine-collection", then subdirectory "red-wine", then file "buy-item.gliim".
- File/path naming conventions
By default, a request handler would be implemented in a source file whose path matches the request path, either fully or partially.
The simplest example is that "/buy-item" request must be implemented in file "buy-item.gliim".
As a more involved example, request handler for "/items/wine-collection/red-wine/buy-item" can be implemented in file "items.gliim" or file "items/wine-collection.gliim" or file "items/wine-collection/red-wine.gliim" or file "items/wine-collection/red-wine/buy-item.gliim".
Each of these source files can contain any number of matching requests. For instance, file "items.gliim" can contain request handlers for both "/items/wine-collection/red-wine/buy-item" and "/items/beer-collection/ipa-beer/buy-item"; while file "items/wine-collection.gliim" can contain request handlers for both "items/wine-collection/red-wine" and "items/wine-collection/white-wine".
By the same token, file "items/wine-collection/red-wine/buy-item.gliim" can implement both "/items/wine-collection/red-wine/buy-item" and "/items/wine-collection/red-wine/buy-item/sale" requests, as both requests match the file path.
Note that if you use "--single-file" option in gg, then each source ".gliim" file must contain only a single request, and its request path must match the file path fully. So in this case, request handler for "/items/wine-collection/red-wine/buy-item" must be in file "items/wine-collection/red-wine/buy-item.gliim", and no other request can be implemented in it.
The actual input parameters follow the request path, and can be specified in a number of ways. In any of these forms, a parameter value is generally URL encoded.
- Path segments
A common way is to specify name and value separated by an equal sign within a single path segment:
https://some.web.site/shopping/buy-item/sku=4811/price=600/
Copied!
This way, you have a readable representation of parameter names and values, while still maintaining the hierarchical form which conveys how the parameters are structured.
Here, the required request path is "/buy-item" and there are two input parameters ("sku" and "price") with values of "4811" and "600" respectively.
- Query string
Parameters can be specified after a question mark in a "name=value" form. For example, the full URL (with the same parameter values as above) may be:
https://some.web.site/shopping/buy-item?sku=4811&price=600
Copied!
- Mixed
You can specify a mix of the above ways to write parameters, for instance the above URL can be written as:
https://some.web.site/shopping/buy-item/sku=4811?price=600
Copied!
- Parameters
A parameter name can be comprised of alphanumeric characters, hyphens and underscores, and it must start with an alphabet character. Any hyphens are converted to underscores for the purpose of obtaining parameter value, see get-param. Do not use double underscore ("__") in parameter names.
How you structure your parameters, i.e. their order among the path segments and which ones (if any) go into the query string, is up to you. Regardless of your choices, the code that handles a request is the same. In the example used here, you can obtain the parameters in request handler source file "buy-item.gliim":
begin-handler /buy-item public
get-param sku
get-param price
run-query @mydb = "update wine_items set price='%s' where sku='%s'" : price, sku no-loop
@OKAY
end-handler
Copied!
For a hierarchical URL path, you would write the same:
begin-handler /items/wine-collection/red-wine/buy-item public
get-param sku
get-param price
run-query @mydb = "update wine_items set price='%s' where sku='%s'" : price, sku no-loop
end-handler
Copied!
Maximum length of a request URL is 2500 bytes.
How Gliimly handles requests
An incoming request is handled by the first available process:
- For a command-line program, there is only a single process, and it handles a single request before it exits.
- For a service application, there can be any number of processes running. A process is chosen to service a request if it is currently not serving other requests; this way there are no processes waiting idle unnecessarily. Each process is identical and can serve any request, i.e. it has all request handlers available to it. Thus, when a process is chosen to serve a request, then this process will simply execute its begin-handler.
- Processing a request
To handle a request, a process first calls a Gliimly dispatcher, which is automatically generated. It uses a request name to call the appropriate request handler, as explained above.
You can implement two hooks into request handling: one that executes before each request (before-handler) and one that executes afterwards (after-handler).
At the end of the request, all request memory and all file handles allocated by Gliimly will be freed, except for process-scoped memory (see memory-handling).
- Performance
Gliimly uses a hash table to match a request with a handler function, as well as to match parameters. Typically, it takes only a single lookup to find the handler function/parameters, regardless of the number of possible request names/parameters a process may serve (be it 10 or 10,000 different names). Gliimly pre-generates a hash table at compile time, so no run-time cycles are spent on creating it. Also, the hash table is created as a continuous block of memory in the program's data segment, which loads as a part of the program (as a single memory copy) and is very fast because accessing the data needs no pointer translations. As a result, the Gliimly dispatcher is extremely fast.
- Unrecognized requests
If no request has been recognized (i.e. request name does not match any request-handling ".gliim" source file), then
Requests
request
See all
documentation
Resize array
Purpose: Resize array's hash table.
resize-array <array> hash-size <new size>
Copied!
resize-array will resize <array>'s hash table (created by new-array) to size <new size>, which refers to the number of "buckets", or possible hash codes derived from the keys stored.
When the number of elements stored grows, search performance may decline if the array size remains the same. Conversely, if the number of elements shrinks, the memory allocated by the array may be wasted. Use get-array to obtain its current hash-size, its length (the number of elements currently stored in it) and the statistics (such as average reads) to determine if you need to resize it.
Resizing is generally expensive, so it should not be done too often, and only when needed. The goal is to amortize this expense through the future gain in lookup performance. For that reason it may be better to resize proportionally (i.e. by a percentage), unless you have a specific application reason to do otherwise or need to avoid exponential growth.
resize-array h hash-size 100000
Copied!
Array
get-array
new-array
purge-array
read-array
resize-array
write-array
See all
documentation
Rewind fifo
Purpose: Rewind FIFO list to the beginning.
rewind-fifo <list>
Copied!
rewind-fifo will position at the very first data element put into <list> which was created with new-fifo. Each time read-fifo is used, the internal position moves to the next element in the order they were put in. rewind-fifo rewinds back to the very first one.
See read-fifo.
FIFO
delete-fifo
new-fifo
purge-fifo
read-fifo
rewind-fifo
write-fifo
See all
documentation
Rewind lifo
Purpose: Rewind LIFO list.
rewind-lifo <list>
Copied!
rewind-lifo will position at the very last data element put into <list> which was created with new-lifo. Each time read-lifo is used, the internal position moves to the previous element in the reverse order they were put in.
See read-lifo.
LIFO
delete-lifo
new-lifo
purge-lifo
read-lifo
rewind-lifo
write-lifo
See all
documentation
Rollback transaction
Purpose: Rollbacks a SQL transaction.
rollback-transaction [ @<database> ] \
[ on-error-continue | on-error-exit ] \
[ error <error> ] [ error-text <error text> ] \
[ options <options> ]
Copied!
rollback-transaction will roll back a transaction started with begin-transaction.
<options> (in "options" clause) are any additional options you wish to send to the database for this functionality.
Once you start a transaction with begin-transaction, you must either commit it with commit-transaction or rollback with rollback-transaction. If you do neither, your transaction will be rolled back once the request has completed and your program will stop with an error message. This is because opening a transaction and leaving without committing or a rollback is a bug in your program.
You must use begin-transaction, commit-transaction and rollback-transaction instead of calling this functionality through run-query.
<database> is specified in "@" clause and is the name of the database-config-file. If omitted, your program must use exactly one database (see --db option in gg).
The error code is available in <error> variable in "error" clause - this code is always "0" if successful. The <error> code may or may not be a number but is always returned as a string value. In case of error, error text is available in "error-text" clause in <error text> string.
"on-error-continue" clause specifies that request processing will continue in case of an error, whereas "on-error-exit" clause specifies that it will exit. This setting overrides database-level db-error for this specific statement only. If you use "on-error-continue", be sure to check the error code.
Note that if database connection was lost, and could not be reestablished, the request will error out (see error-handling).
begin-transaction @mydb
run-query @mydb="insert into employee (name, dateOfHire) values ('Terry', now())"
run-query @mydb="insert into payroll (name, salary) values ('Terry', 100000)"
rollback-transaction @mydb
Copied!
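If you prefer to handle errors yourself, a sketch of obtaining the error code and text after a rollback could look like this:
rollback-transaction @mydb on-error-continue error err error-text etext
@Rollback error code is <<p-out err>> (text: <<p-out etext>>)
Copied!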
Database
begin-transaction
commit-transaction
current-row
database-config-file
db-error
mariadb-database
postgresql-database
rollback-transaction
run-query
sqlite-database
See all
documentation
Run query
Purpose: Execute a query and loop through result set.
run-query \
[ @<database> ] \
= <query text> \
[ input <input parameter> [ , ... ] ] \
[ output ( <column name> [ noencode | urlencode | webencode ] ) [ , ... ] ] \
[ no-loop ] \
[ error <error> ] \
[ error-text <error text> ] \
[ affected-rows <affected rows> ] \
[ row-count <row count> ] \
[ on-error-continue | on-error-exit ]
<any code>
[ end-query ]
run-prepared-query \
... ( the same as run-query ) ...
Copied!
run-query executes a query specified with string <query text>.
<database> is specified in "@" clause and is the name of the database-config-file. If omitted, your program must use exactly one database (see --db option in gg).
- output clause
"output" clause is a comma-delimited list of the query's output columns. The column names do not need to match the actual query column names, rather you can name them anyway you want, as long as they positionally correspond. String variables with the same name are created for each column name and query's output assigned to them. For example:
run-query @db = "select firstName, lastName from employees" output first_name, last_name
@First name <<p-out first_name>>
@Last name <<p-out last_name>>
end-loop
Copied!
Note that the output is by default web-encoded. You can set the encoding of column output by using either "noencode" (for no encoding), "urlencode" (for URL-encoding) or "webencode" (for web-encoding) clause right after column name (see encode-web, encode-url for description of encodings). For example, here the first output column will not be encoded, and the second will be URL-encoded:
run-query @db = "select firstName, lastName from employees" output first_name noencode, last_name urlencode
@First name <<p-out first_name>>
@Last name <<p-out last_name>>
end-loop
Copied!
The query's input parameters (if any) are specified with '%s' in the <query text> (note that single quotes must be included). The actual input parameters are provided after "input" clause (you can instead use a colon, i.e. ":"), in a comma-separated list. Each input variable is a string regardless of the actual column type, as the database engine will interpret the data according to its usage. Each input variable is trimmed (left and right) before being used in a query.
"end-query" statement ends the loop in which query results are available through "output" clause. "no-loop" clause includes implicit "end-query", and in that case no "end-query" statement can be used. This is useful if you don't want to access any output columns (or there aren't any), but rather only affected rows (in INSERT or UPDATE for example), row count (in SELECT) or error code. "end-query" is also unnecessary for DDL statements like "CREATE INDEX" for instance.
"affected-rows" clause provides the number of <affected rows> (such as number of rows inserted by INSERT). The number of rows affected is typically used for DML operations such as INSERT, UPDATE or DELETE. For SELECT, it may or may not be the same as "row-count" which returns the number of rows from a query. See your database documentation for more.
The number of rows returned by a query can be obtained in <row count> in "row-count" clause.
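For example, a sketch of obtaining only the number of rows a SELECT returns, without looping through the results:
run-query @db = "select firstName from employee" no-loop row-count nrows
@Number of employees: <<p-num nrows>>
Copied!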
The error code is available in <error> variable in "error" clause - this code is always "0" if successful. The <error> code may or may not be a number but is always returned as a string value. In case of error, error text is available in "error-text" clause in <error text> string.
"on-error-continue" clause specifies that request processing will continue in case of an error, whereas "on-error-exit" clause specifies that it will exit. This setting overrides database-level db-error for this specific statement only. If you use "on-error-continue", be sure to check the error code.
Note that if database connection was lost, and could not be reestablished, the request will error out (see error-handling).
"=" and "@" clauses may or may not have a space before the data that follows. So for example, these are both valid:
"@""="
run-query @db ="select firstName, lastName from employee where employeeId='%s'" output firstName, lastName input empid
"@""="
run-query @ db = "select firstName, lastName from employee where employeeId='%s'" output firstName, lastName input empid
Copied!
run-prepared-query is the same as run-query except that a <query> is prepared. That means it is pre-compiled and its execution plan is created once, instead of each time a query executes. The statement is cached going forward for the life of the process (with the rare exception of re-establishing a lost database connection). It means effectively an unlimited number of requests will be reusing the query statement, which generally implies higher performance. Note that databases do not allow prepared queries for DDL (Data Definition Language), as there is not much benefit in general, hence only DML queries (such as INSERT, DELETE etc.) and SELECT can be prepared.
In order for the database to cache a query statement, Gliimly will save the query text that actually executes the very first time the query runs. Then, regardless of what query text you supply in subsequent executions, it will not change anymore. It means from that moment onward, the query will always execute that very same query text, just with different input parameters. In practical terms it means that <query> should be a string constant if you are using a prepared query (which is usually the case).
In some cases, you might not want to use prepared statements. Some reasons may be:
- your statements are often changing and dynamically constructed to the point where managing a great many equivalent prepared statements may be impractical - for example there may be a part of your query text that comes from outside your code,
- your dynamic statements do not execute as many times, which makes prepared statements slower, since they require two trips to the database server to begin with,
- your query cannot be written as a prepared statement due to database restrictions,
- in some cases prepared statements are slower because the execution plan depends on the actual data used, in which case non-prepared statement may be a better choice,
- in some cases the database support for prepared statements may still have issues compared to non-prepared,
- typically prepared statements do not use database query cache, so repeating identical queries with identical input data may be faster without them.
Note that in Postgres, with prepared statements you may get an error like "could not determine data type of parameter $N". This is an issue with Postgres server. In this case you can use "::<type>" qualifier, such as for instance to tell Postgres the input parameter is text:
select col1 from test where someId>='%s' and col1 like concat( '%s'::text ,'%')
Copied!
Note that SQL statements in SQLite are always prepared regardless of whether you use "run-query" or "run-prepared-query" due to how SQLite native interface works.
Select first and last name (output is firstName and lastName) based on employee ID (specified by input parameter empid):
get-param empid
run-query @db = "select firstName, lastName from employee where employeeId='%s'" output firstName, lastName input empid
@Employee is <<p-out firstName>> <<p-out lastName>>
end-query
Copied!
Prepared query without a loop and obtain error code and affected rows:
run-prepared-query @db = qry no-loop \
error ecode affected-rows arows input stock_name, stock_price, stock_price
Copied!
Database
begin-transaction
commit-transaction
current-row
database-config-file
db-error
mariadb-database
postgresql-database
rollback-transaction
run-query
sqlite-database
See all
documentation
Run remote
Purpose: Call a remote service in a single statement.
run-remote <service> \
( local <app name> ) | ( location <location> ) \
url-path <service URL> |
( \
app-path <app path> \
request-path <request path> \
[ url-params <url params> ] \
) \
[ request-body content <content> \
[ content-length <content length> ] \
[ content-type <content type> ] ] \
[ method <request method> ] \
[ environment <name>=<value> [ , ... ] ] \
[ timeout <timeout> ]\
[ status <status> ] \
[ started <started> ] \
[ finished-okay <finished okay> ]\
[ data <data> ] \
[ error <error> ] \
[ status <status> ] \
[ status-text <status text> ] \
[ handler-status <service status> ]
Copied!
run-remote is a combination of new-remote, call-remote and read-remote in one. Clauses for each of those can be specified in any order. Only a single <service> can be used. If a call to <service> succeeds, its results are read. Use either:
- <status> (in "status" clause) to check if there are results to be read: if it is GG_OKAY, then you can use the results.
- <finished okay> (in "finished-okay" clause) to check if service call executed: if it's 1, then it has.
See details for each clause in new-remote (for "local" through "timeout" clauses), call-remote (for "status" through "finished-okay" clauses) and read-remote (for "data" through "handler-status" clauses).
begin-handler /serv public
run-remote nf local "hash-server-yey" \
url-path "/hash-server-yey/server/op=add/key=sb_XYZ/data=sdb_123" \
finished-okay sfok \
data rdata error edata \
status st handler-status rstatus
if-true sfok not-equal 1 or st not-equal GG_OKAY
@Call did not succeed
else-if
@Result is <<p-out rdata>> and (any) error is <<p-out edata>>
end-if
end-handler
Copied!
Distributed computing
call-remote
new-remote
read-remote
run-remote
See all
documentation
SELinux
If you do not use SELinux, you can ignore this.
SELinux is a MAC (Mandatory Access Control) system, which means anything that isn't allowed is prohibited. This is as opposed to DAC (Discretionary Access Control), where everything is allowed except what's prohibited. MAC generally works on top of DAC, and they are expected to work in a complementary fashion. Gliimly deploys both methods for enhanced security.
Gliimly comes with a SELinux policy out-of-the-box, which covers its general functioning. However, you can write any code with Gliimly, and if you are using SELinux, you may run afoul of its other policies, which may not be conducive to your code. In that case, temporarily switch to permissive mode (via setenforce), then use audit2allow to find out what the issue is, and take action to allow what's needed.
Note that OpenSUSE package does not come with SELinux policy as of this release, because OpenSUSE at this time does not come with a default base policy and SELinux installation.
Gliimly policy files (including .te, .fc files, while .if file is empty) can be found here:
ls $(gg -l)/selinux/*.{te,fc}
Copied!
As a part of installing Gliimly, the following SELinux types will be installed:
- ggfile_t: all files within Gliimly directory (/var/lib/gg) are labeled with this type.
- gg_t: domain type (process type) of all Gliimly executables that communicate with other processes (be it Unix or TCP sockets). Only files labeled ggfile_t can run as this process type.
- ggport_t: port type that any Gliimly process is allowed to bind to, accept and listen. No other process types are allowed to do so.
Gliimly policy:
- allows Gliimly processes unconfined access. This is expected as Gliimly is a general purpose framework. It means you do not have to do anything to connect to database, use files, connect to other servers etc.
- allows web servers (httpd_t domain type) to connect to sockets labeled with ggfile_t, but does not allow any other access. This allows communication between reverse-proxy web servers and Gliimly applications.
- allows web servers to connect to any Gliimly process that is listening on a TCP port (see gg), but does not allow any other access (i.e. to any other ports).
Gliimly policy allows normal functioning of Gliimly features only, but does not introduce any unnecessary privileges to the rest of the system.
Note: Gliimly installation does not distribute .pp (compiled) policy files, because it is not currently part of distro repos. Due to changes in SELinux and differences in versions installed across derived distros, Gliimly will compile source .te and .fc files during the installation, ensuring the best possibility of successful SELinux policy setup.
Using Unix domain sockets for Gliimly processes to communicate with a web server (see gg) is the default method and no further action is needed.
Using TCP sockets for Gliimly processes to communicate with a web server (see gg) requires you to label such ports as ggport_t, for example if you plan to use port 2109:
sudo semanage port -a -t ggport_t -p tcp 2109
Copied!
When you no longer need a port, for example if you are switching to another port (for instance 2209), remove the old one and add the new one:
sudo semanage port -d -t ggport_t -p tcp 2109
sudo semanage port -a -t ggport_t -p tcp 2209
Copied!
Changing or adding directories
If you are adding directories to be used by Gliimly program, or changing a directory, for example using a different storage instead of /var/lib/gg (see directories), you need to label files in new directories:
sudo semanage fcontext -a -t ggfile_t "/your/new/dir(/.*)?"
sudo restorecon -R /your/new/dir
Copied!
To remove context from such directories (if you are not using them anymore), use:
sudo semanage fcontext -d -t ggfile_t "/your/new/dir(/.*)?"
sudo restorecon -R /your/new/dir
Copied!
General
about-gliim
directories
SELinux
See all
documentation
SEMI
SEMI (SimplE Message Interface) is a binary format used to write (pack) and read (unpack) messages consisting of key/value pairs in the form of:
<key>=<8 byte length of value><value>
Copied!
<key> can be comprised of any characters other than an equal sign ("=") or an exclamation point ("!"), and any surrounding whitespaces are trimmed.
The value is always preceded by an 8-byte length of the value (a binary number in big-endian 64-bit format), followed by the value itself, followed by a new line ("\n") at the end, which is not counted in the length. A special <key> is "comment", which is always ignored, and serves the purpose of general commenting.
SEMI implicitly supports binary data without the need for any kind of encoding, and the number of bytes is specified ahead of a value, making parsing efficient.
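As an illustration of the format described above, a single pair with key "key1" and value "value1" (6 bytes) would be laid out as the key, an equal sign, the 8 big-endian length bytes, the value, and a trailing new line:
key1=\x00\x00\x00\x00\x00\x00\x00\x06value1\n
Copied!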
Messages
get-message
new-message
read-message
SEMI
write-message
See all
documentation
Send file
Purpose: Send file to client.
send-file <file> [ headers \
[ content-type <content type> ] \
[ download [ <download> ] ] \
[ etag [ <etag> ] ] \
[ file-name <file name> ] \
[ ( cache-control <cache control> ) | no-cache ] \
[ status-id <status id> ] \
[ status-text <status text> ] \
[ custom <header name>=<header value> [ , ... ] ]
]
Copied!
When a client requests download of a file, you can use send-file to provide <file>, which is its location on the server, and is either a full path or relative to the application home directory (see directories). Note however that you can never use dot-dot (i.e. "..") in <file> - this is a security measure to avoid path-traversal attacks. Thus the file name should never have ".." in it, and if it does, the program will error out.
Headers
The following are subclauses that allow setting any custom header:
- <content type> is content type (such as "text/html" or "image/jpg" etc.) If you are sending a file to a client for download and you don't know its content type, you can use "application/octet-stream" for a generic binary file.
- If "download" is used without boolean variable <download>, or if <download> evaluates to true, then the file is sent to a client for downloading - otherwise the default is to display file in client.
- <file name> is the name of the file being sent to a client. This is not the local file name - it is the file name that client will use for its own purposes.
- <cache control> is the cache control HTTP header. "no-cache" instructs the client not to cache. Only one of "cache-control" and "no-cache" can be used. An example of <cache control>:
send-file "somepic.jpg" headers cache-control "max-age: 3600"
Copied!
- If "etag" is used without boolean variable <etag>, or if <etag> evaluates to true, then "ETAG" header will be generated (a timestamp) and included, otherwise it is not. The time stamp is of last modification date of the file (and typically used to cache a file on client if it hasn't changed on the server). "etag" is useful to let the client know to download the file only once if it hasn't changed, thus saving network and computing resources. ETAG header is used only for send-file.
- <status id> and <status text> are status settings for the response, as strings (such as "425" for "status-id" and "Too early" for "status-text").
- To set any type of generic HTTP header, use "custom" subclause, where <header name> and <header value> represent the name and value of a single header. Multiple headers are separated by a comma. There is no limit on the maximum number of such headers, other than that of the underlying HTTP protocol. You must not use "custom" to set headers already set elsewhere (such as "etag" for instance), as that may cause unpredictable behavior. For instance this sets two custom headers:
out-header use custom "CustomOption3"="CustomValue3", "Status"="418 I'm a teapot"
Copied!
"custom" subclause lets you use any custom headers that exist today or may be added in the future, as well as any headers of your own design.
Any cookies set prior to send-file (see set-cookie and delete-cookie) will be sent along with the file to the web client.
To send a document back to the browser and show it (i.e. display):
send-file "/home/gliim/files/myfile.jpg" headers content-type "image/jpg"
Copied!
An example to display a PDF document:
set-string pdf_doc="/home/mydir/myfile.pdf"
send-file pdf_doc headers content-type "application/pdf"
Copied!
If you want to send a file for download (with the dialog), use "download" clause. This way the document is not displayed but the "Save As" (or similar) window shows up, for example to download a "PDF" document:
send-file "/home/user/file.pdf" headers download content-type "application/pdf"
Copied!
Web
call-web
out-header
send-file
silent-header
See all
documentation
Server API
Gliimly can be used in extended-mode, where non-Gliimly code or libraries can be linked with your application.
Such code can be from a library (see --llflag and --cflag options in gg), or can be written directly as C code, i.e. files with .c and .h extension together with your Gliimly application. To do this, use call-extended statement.
Any function with C linkage can be used provided:
- its parameters are (by value or reference) only of type: "int64_t" (number type in Gliimly), "bool" (bool type in Gliimly) or "char *" (string type in Gliimly).
- it must not return any value (i.e. it must have a "void" return type).
When allocating strings in extended code, you must use Gliimly memory management functions. These functions are based on standard C library (such as malloc or free), but are not compatible with them because Gliimly manages such memory on top of the standard C library.
The functions you can use are:
- char *gg_strdup (char *s) which creates a copy of a null-terminated string "s". A pointer to memory data is returned.
- char *gg_strdupl (char *s, gg_num from, gg_num l) which creates a copy of memory data pointed to by "s", starting from byte "from" of length "l". Note that "from" is indexed from 0. A pointer to memory data is returned.
- void *gg_malloc(size_t size) which allocates memory of size "size" and returns a pointer to it.
- void *gg_calloc(size_t nmemb, size_t size) allocates "nmemb" blocks of memory (each of size "size") and returns a pointer to it. Memory is initialized to all zero bytes.
- gg_num gg_mem_get_id (void *ptr) returns the Gliimly memory handle for memory "ptr".
- void *gg_realloc(gg_num r, size_t size) reallocates memory identified with Gliimly memory handle "r" (see gg_mem_get_id()) to a new size of "size" and returns a pointer to it. Note that you can only reallocate the memory you created with gg_malloc() and gg_calloc() - do not attempt to reallocate Gliimly memory passed to your function. Gliimly in general never reallocates any existing memory in any statement.
- void gg_mem_set_len (gg_num r, gg_num len) sets the length of memory identified with Gliimly memory handle "r" to "len" bytes. Note that all Gliimly memory must have a null-byte at the end for consistency, regardless of whether such memory holds pure binary data or an actual null-delimited string. So for example, a string "abc" would have "len" set to 4 to include a null byte, and binary data "\xFF\x00\x01" (which consists of 3 bytes, the middle of which is a null byte) would have "len" also set to 4 and you would place an extra zero byte at the end of it even if it's not part of the actual useful data. Note that whatever memory length you set, it must be lesser or equal to the length of memory you have actually allocated.
- gg_num gg_mem_get_len (gg_num r) returns the length of memory identified with Gliimly memory handle "r". The length returned is 1 byte less than the length set by gg_mem_set_len(), so for example for string "abc" the return value would be 3, as it would be for "\xFF\x00\x01".
You can use gg_malloc(), gg_calloc() and gg_realloc() to create new Gliimly-compatible memory - and assuming you have set the last byte of any such memory to a null byte, the resulting memory will be properly sized for Gliimly usage.
If you have memory that's already provided from elsewhere, you can use gg_strdup() or gg_strdupl() to create a copy of it that's compatible with Gliimly.
If Gliimly memory you created with these functions has extra unused bytes, you can use either gg_realloc() to reduce its footprint, or you can use gg_mem_set_len() to set its length.
Note that if you use C code included with a Gliimly project, you must include "gliim.h" file in each of them. You do not need to manually include any other ".h" files (header files), as they will be automatically picked up.
Place the following files in a separate directory for demonstration purposes.
In this example, "example.gliim" will use C functions from "example.c", and "example.h" will have declarations of those functions. File "example.c" implements a factorial function, as well as a function that will store the factorial result in an output message that's allocated and passed back to your Gliimly code:
#include "gliim.h"
void get_factorial(gg_num f, gg_num *res)
{
*res = 1;
gg_num i;
for (i = 2; i <= f; i++) {
*res *= i;
}
}
#define MEMSIZE 200
void fact_msg (gg_num i, char **res)
{
char *r = gg_malloc (MEMSIZE);
gg_num f;
get_factorial (i, &f);
gg_num bw = snprintf(r, MEMSIZE, "Factorial value (message from C function) is %ld", f) + 1;
*res = gg_realloc (gg_mem_get_id(r), bw);
}
Copied!
File "example.h" declares the above functions:
void get_factorial(gg_num f, gg_num *res);
void fact_msg (gg_num i, char **res);
Copied!
File "example.gliim" will call the above functions and display the results:
extended-mode
begin-handler /example public
set-number fact
call-extended get_factorial (10, &fact)
@Factorial is <<p-num fact>>
set-string res
call-extended fact_msg (10, &res)
p-out res
@
end-handler
Copied!
Create application "example":
sudo mgrg -i -u $(whoami) example
Copied!
Make the application:
gg -q
Copied!
Run it:
gg -r --req="/example" --exec --silent-header
Copied!
The output is, as expected:
Factorial is 3628800
Factorial value (message from C function) is 3628800
Copied!
API
Client-API
Server-API
See all
documentation
Service
You can run a Gliimly application as a service by using mgrg program manager. Your application can then use commonly used web servers or load balancers (such as Apache, Nginx or HAProxy) so it becomes available on the web.
You can access your server application by means of:
- A web server (which is probably the most common way). You need to setup a reverse proxy, i.e. a web server that will forward requests and send replies back to clients; see below.
- The command line, in which case you can use gg (see -r option).
- Client-API, which allows any application in any programming language to access your server, as long as it supports C linkage (by far most do). This method allows for MT (multithreaded) access to your application, where many client requests can be made in parallel.
Gliimly server runs as a number of (zero or more) background processes in parallel, processing requests simultaneously.
Setting up reverse proxy (web server)
To access your application via a reverse proxy (i.e. web server), generally you need to add a proxy directive and restart the web server.
If you use Apache, you need to connect it to your application, see connect-apache-tcp-socket (for using TCP sockets) and connect-apache-unix-socket (for using Unix sockets). If you use Nginx, you need to connect it to your application, see connect-nginx-tcp-socket (for using TCP sockets) and connect-nginx-unix-socket (for using Unix sockets). For HAProxy, see connect-haproxy-tcp-socket. Virtually all web servers/proxies support FastCGI protocol used by Gliimly; please see your server's documentation.
Starting Gliimly server processes
Use mgrg, for example:
mgrg <app name>
Copied!
which in general will (based on the request load) start zero or more background resident processes (daemons) that process requests in parallel; or, for instance:
mgrg -w 20 <app name>
Copied!
which will start 20 processes.
In a heavy-load environment, a client's connection may be rejected by the server. This may happen if the client runs very slowly, perhaps due to swapping. Once a client establishes a connection, it has up to 5 seconds by default to send data; if it doesn't, the server will close the connection. Typically, clients send data right away, but under a heavy load this time may be longer. To set the connection timeout in milliseconds, set the following variable before starting the application server, for instance:
export "LIBFCGI_IS_AF_UNIX_KEEPER_POLL_TIMEOUT"="8000"
mgrg -w 1 <app name>
Copied!
In this case, the timeout is set to 8 seconds.
Running application
application-setup
CGI
command-line
service
See all
documentation
Set bool
Purpose: Set value of a boolean variable.
set-bool <var> [ = <boolean> ] [ process-scope ]
Copied!
Boolean variable <var> is either assigned value <boolean> with "=" clause, or it is assigned "false" if equal clause ("=") is omitted.
If "process-scope" clause is used, then boolean is of process scope, meaning its value will persist from one request to another for the life of the process.
Assign "true" value to boolean variable "my_bool":
set-bool my_bool = true
Copied!
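A minimal sketch of the "process-scope" clause (the variable name "is_cached" is made up for illustration); the value persists across requests served by the same process:
// "is_cached" is a hypothetical name; it keeps its value for the life of the process
set-bool is_cached = true process-scope
Copied!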
Program flow
break-loop
code-blocks
continue-loop
do-once
exit-handler
if-defined
if-true
set-bool
start-loop
See all
documentation
Set cookie
Purpose: Set cookie.
set-cookie ( <cookie name>=<cookie value> \
[ expires <expiration> ] \
[ path <path> ] \
[ same-site "Lax"|"Strict"|"None" ] \
[ no-http-only [ <no-http-only> ] ] \
[ secure [ <secure> ] ] ) ,...
Copied!
To set a cookie named by string <cookie name> to string value <cookie value>, use set-cookie statement. A cookie must be set prior to outputting any actual response (such as with output-statement or p-out for example), or the program will error out and stop.
Cookie's <expiration> date (as a string, see get-time) is given with "expires" clause. The default is a session cookie, meaning the cookie expires when the client session closes.
Cookie's <path> is specified with "path" clause. The default is the URL path of the request URL.
Whether a cookie applies to the same site is given with "same-site" clause along with possible values of "Lax", "Strict" or "None".
By default a cookie is not accessible to client scripting (i.e. "HttpOnly") - you can change this with "no-http-only" clause. That will be the case if "no-http-only" clause is used without bool expression <no-http-only>, or if <no-http-only> evaluates to true.
Use "secure" if a secure connection (https) is used, in order to specify this cookie is available only with a secure connection. That will be the case if "secure" is used without bool expression <secure>, or if <secure> evaluates to true.
Cookies are commonly used for session maintenance, tracking and other purposes. Use get-cookie and delete-cookie together with set-cookie to manage cookies.
You can set multiple cookies separated by a comma:
get-time to tm year 1
set-cookie "mycookie1"="4900" expires tm path "/", "mycookie2"="900" expires tm path "/my-app" same-site "Strict"
Copied!
To set a cookie named "my_cookie_name" to value "XYZ", that will go with the reply and expire in 1 year and 2 months from now, use:
get-time to mytime year 1 month 2
set-string my_cookie_value="XYZ"
set-cookie "my_cookie_name"=my_cookie_value expires mytime path "/" same-site "Lax"
Copied!
A cookie that can be used by JavaScript (meaning we use no-http-only clause):
set-cookie "my_cookie_name"=my_cookie_value no-http-only
Copied!
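A minimal sketch of the "secure" clause, assuming the cookie name and value from the examples above; the cookie is then sent only over a secure (https) connection:
// send cookie only over https
set-cookie "my_cookie_name"=my_cookie_value secure
Copied!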
Cookies
delete-cookie
get-cookie
set-cookie
See all
documentation
Set number
Purpose: Set value of a number variable.
set-number <var> [ = <number> ] [ process-scope ]
Copied!
Number variable <var> is either assigned value <number> with "=" clause, or it is assigned 0 if equal clause ("=") is omitted.
If "process-scope" clause is used, then number is of process scope, meaning its value will persist from one request to another for the life of the process.
Initialize number "my_num" to 0 and the value of this variable, however it changes, will persist through any number of requests in the same process:
set-number my_num process-scope
Copied!
Initialize number "my_num" to 10:
set-number my_num = 10
Copied!
Subtract 5:
set-number my_num = my_num-5
Copied!
Assign an expression:
set-number my_num = (some_num*3+1)%5
Copied!
Numbers
number-expressions
number-string
set-number
string-number
See all
documentation
Set param
Purpose: Set or create a parameter.
set-param ( <name> [ = <value> ] ) , ...
Copied!
set-param sets or creates parameter <name> (see get-param).
If parameter <name> does not exist, it's created with <value>. If it does exist, its value is replaced with <value>. Note that <value> can be of any type.
If equal sign ("=") and <value> are omitted, then <value> is the same as <name>, so:
set-param something
Copied!
is the same as:
set-param something = something
Copied!
where the first "something" is the parameter set/created, and the second "something" is an actual variable in your code. In this example, the two just happen to have the same name; this generally happens often, so this form is a shortcut for that.
You can specify any number of parameters separated by a comma, for instance in this case par1 is a boolean, par2 is a number and par3 is a string:
set-number par2 = 10
set-param par1=true, par2, par3="hi"
Copied!
Set the value of parameter "quantity" to "10", which is also the output:
set-param quantity = "10"
...
get-param quantity
p-out quantity
Copied!
Request data
get-param
request-body
set-param
See all
documentation
Set string
Purpose: Set value of a string variable.
set-string <variable> [ = <string> ] [ process-scope ] [ unquoted ]
Copied!
String variable <variable> will be assigned a value of <string> if clause "=" is present; otherwise <variable> is assigned an empty string.
If "process-scope" clause is used, then <variable> will be of process scope, meaning its value will persist from one request to another for the life of the process; this clause can only be used if <variable> did not already exist.
If "unquoted" clause is used, then <string> literal is unquoted, and everything from equal clause ("=") to the rest of the line is a <string>; in this case there is no need to escape double quotes or backslashes. Note that in this case, "unquoted" and any other clause must appear prior to equal clause ("=") and after variable, because they wouldn't otherwise be recognized. For instance:
set-string my_string unquoted = this is "some" string where there escape characters like \n do "not work"
Copied!
This is the same as:
set-string my_string = "this is \"some\" string where there escape characters like \\n do \"not work\""
Copied!
"unquoted" clause is useful when writing string literals that would otherwise need lots of escaping.
Initialize "my_string" variable to "":
set-string my_string
Copied!
Initialize "my_string" variable to "abc":
set-string my_string = "abc"
Copied!
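A minimal sketch of a process-scoped string (the variable name "greeting" is made up for illustration); its value persists from one request to another for the life of the process:
// "greeting" is a hypothetical name; it must not have existed before this statement
set-string greeting = "hello" process-scope
Copied!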
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Silent header
Purpose: Do not output HTTP headers.
silent-header
Copied!
silent-header will suppress the output of HTTP headers, such as with out-header, or in any other case where headers are output. The effect applies to the current request only; if you use it conditionally, you can have it on or off dynamically.
If you want to suppress the headers for all service handlers (as if silent-header were implied at the beginning of each), then for a command-line program, use "--silent-header" option in "gg -r" when running it; to suppress the headers in services, use "-z" option in mgrg.
silent-header must be used prior to outputting headers, meaning either prior to any output (if out-header is not used) or prior to first out-header.
There are many uses for silent-header, among them:
- A command-line program may use it to produce generic output, without any headers,
- the output from a program may be redirected to a web file (such as html), in case of dynamic content that rarely changes,
- a web program may output a completely different (non-HTTP) set of headers, etc.
silent-header
Copied!
Web
call-web
out-header
send-file
silent-header
See all
documentation
Split string
Purpose: Split a string into pieces based on a delimiter.
split-string <string> with <delimiter> to <result>
split-string delete <result>
Copied!
split-string will find all instances of string <delimiter> in <string> and then split it into pieces delimited by string <delimiter>. The <result> can be used with read-split to obtain the pieces and with split-string to delete it (use "delete" clause with <result> to delete it).
All pieces produced will be trimmed both on left and right. If a piece is double quoted, then double quotes are removed. For instance save this code in "ps.gliim" in a separate directory:
%% /parse public
set-string clist = "a , b, \"c , d\" , e"
split-string clist with "," to res count tot
start-loop repeat tot use i
read-split i from res to item status st
if-true st not-equal GG_OKAY
break-loop
end-if
pf-out " [%s]", item
end-loop
%%
Copied!
Create the application, build and run it:
sudo mgrg -i -u $(whoami) ps
gg -q
gg -r --req="/parse" --exec --silent-header
Copied!
The output would be:
[a] [b] [c , d] [e]
Copied!
split-string is useful for parsing CSV (Comma Separated Values) or any other kind of separated values, where the separator can be any string of any length. For example, if you're parsing an encoded URL string, "&" may be a separator, as in the example below.
The following will parse a string containing name/value pairs (such as "name=value") separated by string "&":
%% /parse-url public
set-string url ="x=23&y=good&z=hello_world"
split-string url with "&" to url_var count tot
start-loop repeat tot use i
read-split i from url_var to item
split-string item with "=" to item_var
read-split 1 from item_var to name
read-split 2 from item_var to val
pf-out "Variable %s has value %s\n", name, val
end-loop
%%
Copied!
The result is:
Variable x has value 23
Variable y has value good
Variable z has value hello_world
Copied!
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Sqlite database
The SQLite database configuration file should contain a single line of text: the full path to the SQLite database file used, for example (if you keep it in Gliimly's application directory):
/var/lib/gg/<app name>/app/<db name>.db
Copied!
Database
begin-transaction
commit-transaction
current-row
database-config-file
db-error
mariadb-database
postgresql-database
rollback-transaction
run-query
sqlite-database
See all
documentation
Start loop
Purpose: Loop execution based on a condition.
start-loop [ repeat <repeat> ] \
[ use <loop counter> \
[ start-with <start with> ] [ add <add> ] ]
<any code>
end-loop
Copied!
start-loop will execute code between start-loop and "end-loop" clauses a certain number of times, based on the condition specified and the usage of continue-loop and break-loop, which can be used in-between the two.
<repeat> number (in "repeat" clause) specifies how many times to execute the loop (barring use of continue-loop and break-loop).
<loop counter> (in "use" clause) is a number that by default starts with value of 1, and is incremented by 1 each time execution loops back to start-loop, unless "start-with" and/or "add" clauses are used.
If <start with> (in "start-with" clause) is used, that's the initial value for <loop counter> (instead of the default 1), and if <add> is specified (in "add" clause), then <loop counter> is incremented by <add> each time execution loops back to start-loop (instead of the default 1).
If either of "start-with" or "add" clauses is used, then "use" must be specified.
Print numbers 0 through 19:
start-loop repeat 20 use p start-with 0
p-num p
@
end-loop
Copied!
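A sketch combining "start-with" and "add" clauses, printing even numbers 0 through 18:
start-loop repeat 10 use p start-with 0 add 2
p-num p
@
end-loop
Copied!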
A loop that is controlled via continue-loop and break-loop statements, displaying numbers from 1 through 30 but omitting those divisible with 3:
set-number n
set-number max = 30
start-loop
set-number n add 1
if-true n mod 3
continue-loop
end-if
if-true n greater-than max
break-loop
end-if
p-num n
end-loop
Copied!
Program flow
break-loop
code-blocks
continue-loop
do-once
exit-handler
if-defined
if-true
set-bool
start-loop
See all
documentation
Statements
Gliimly statements generally have three components separated by space(s):
- a name,
- an object,
- clauses.
A statement starts with a name, which designates its main purpose.
An object denotes what is referenced by a statement.
Each clause that follows consists of a clause name followed by either no arguments, or one or more arguments. A clause may have subclauses immediately afterwards, which follow the same structure. Most clauses are separated by space(s), however some (like "=" or "@") may not need space(s) before any data; the statement's documentation will clearly specify this.
An object must immediately follow the statement's name, while clauses may be specified in any order.
For example, in the following Gliimly code:
encrypt-data orig_data input-length 6 password "mypass" salt newsalt to res binary
Copied!
encrypt-data is the statement's name, and "orig_data" is its object. The clauses are:
- input-length 6
- password "mypass"
- salt newsalt
- to res
- binary
The clauses can be in any order, so the above can be restated as:
encrypt-data orig_data to res password "mypass" salt newsalt binary input-length 6
Copied!
Gliimly documentation provides a concise BNF-like notation of how each statement works, which in case of encrypt-data is (backslash simply allows continuing to multiple lines, while two backslashes add a new line in between):
encrypt-data <data> to <result> \
[ input-length <input length> ] \
[ binary [ <binary> ] ] \
( password <password> \
[ salt <salt> [ salt-length <salt length> ] ] \
[ iterations <iterations> ] \
[ cipher <cipher algorithm> ] \
[ digest <digest algorithm> ] \
[ cache ] \
[ clear-cache <clear cache> ] ) \
[ init-vector <init vector> ]
Note the color scheme: clauses with input data are in blue, and with output data in green.
Optional clauses are enclosed in square brackets (i.e. between "[" and "]").
Arguments (in general variables and constants) are stated between "<" and ">".
If only one of a number of clauses may appear, such clauses are separated by "|".
A group of clauses that cannot be separated, or to remove ambiguity, are enclosed with "(" and ")".
Keywords (other than statement names such as encrypt-data above) are generally specific to each statement. So, keyword "salt", for example, has meaning only within encrypt-data and a few other related statements. In order to have the freedom to choose your variable names, you can simply surround them in parentheses (i.e. "(" and ")") and use any names you want, even keywords, for example:
set-string password = "some password"
set-string salt = "0123456789012345"
encrypt-data "some data" password (password) salt (salt) to enc_data
p-out enc_data
Copied!
In this example, keywords "password" and "salt" are used as variable names as well.
Note that while you can use tab characters at the beginning of a line (such as for indentation), as well as in string literals, do not use tabs elsewhere in Gliimly statements, as they are not supported there due to lack of readability - use plain spaces.
Splitting statement into multiple lines, space trimming
To split a statement into multiple lines (including string continuations), use a backslash (\), for instance:
encrypt-data orig_data input-length 6 \
password "my\
pass" salt \
newsalt to res binary
Copied!
Note that all statements are always left-trimmed for whitespace. Thus the resulting string literal in the above example is "mypass", and not "my pass", as the whitespace prior to the line starting with "pass" is trimmed first. Also, all statements are right-trimmed for whitespace, except if a backslash is used at the end, in which case any spaces prior to the backslash are preserved. For that reason, in the above example there is a space prior to the backslash where clauses need to be separated.
Note that begin-handler statement cannot be split with a backslash, i.e. it must always be on a single line for readability.
Comments
You can use both C style (i.e. /* ... */) and C++ style (i.e. //) comments with Gliimly statements, including within statements (with the exception of /*..*/ before statement name for readability), for example:
run-query @db = \
"select firstName, lastName from employee where yearOfHire>='%s'" \
output firstName, lastName : "2015"
Copied!
A statement that fails for reasons that are generally irrecoverable will error out, for example running out of memory or disk space, bad input parameters, etc.
Gliimly philosophy is to minimize the need to check for such conditions by preventing the program from continuing. This is preferable, as forgetting to check usually results in unforeseen bugs and safety issues, and the program should have stopped anyway.
Errors that are correctable programmatically are reported and you can check them, for example when opening a file that may or may not exist.
Overall, the goal is to stop execution when necessary and to offer the ability to handle an issue when warranted, in order to increase run-time safety and provide instant clues about conditions that must be corrected.
Language
inline-code
statements
syntax-highlighting
unused-var
variable-scope
See all
documentation
Stat file
Purpose: Get information about a file.
stat-file <file> \
size | type | path | name \
to <variable>
Copied!
stat-file obtains information about <file>, which is either the full path of a file or directory, or a name relative to the application home directory (see directories).
Clause "size" will store file's size in bytes to number <variable>, or it will be GG_ERR_FAILED (if operation failed, likely because file does not exist or you have no permissions to access it).
Clause "type" will store file's type to number <variable>, and it can be either GG_FILE (if it's a file) or GG_DIR (if it's a directory) or GG_ERR_FAILED (if operation failed, likely because file does not exist or you have no permissions to access it).
Clause "path" (in string <variable>) obtains the fully resolved path of the <file> (including symbolic links), and "name" is the name (a basename, without the path). If path cannot be resolved, then <variable> is an empty string.
To get file size in variable "sz", which is created here:
stat-file "/home/user/file" size to sz
Copied!
To determine if the object is a file or a directory:
stat-file "/home/user/some_name" type to what
if-true what equal GG_FILE
@It's a file!
else-if what equal GG_DIR
@It's a directory!
else-if
@Doesn't exist!
end-if
Copied!
Get the fully resolved path of a file to string variable "fp", which is created here.
stat-file "../file" path to fp
Copied!
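A sketch of the "name" clause, which here would store the base name "file" into string "fn", created by the statement:
stat-file "/home/user/file" name to fn
Copied!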
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
String length
Purpose: Get string length.
string-length <string> to <length>
Copied!
string-length will place the number of bytes in <string> into number <length>.
Note that <string> does not need to be null-terminated, meaning it can be a binary or text string. <length> is the number of bytes comprising any such string.
Variable "len" will be 6:
set-string str = "string"
string-length str to len
Copied!
Variable "len2" will be 18 - the string has a null character in the middle of it:
set-string str2 = "string" "\x00 after null"
string-length str2 to len2
Copied!
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
String number
Purpose: Convert string to number.
string-number <string> [ to <number> ] \
[ base <base> ] \
[ status <status> ]
Copied!
<string> is converted to <number> in "to" clause, using <base> in "base" clause, where <base> is by default either 10, or 16 (if number is prefixed with "0x" or "0X", excluding any leading minus or plus sign) or 8 (if number is prefixed with "0", excluding any leading minus or plus sign).
<base> can be between 2 and 36, inclusive. <number> can be positive or negative (i.e. signed) and can be up to 64-bit in length. If <base> is 0, it is the same as if it is not specified, i.e. default behavior applies.
<status> number (in "status" clause) is GG_OKAY if conversion was successful. If it wasn't successful, <number> is 0 and <status> is GG_ERR_OVERFLOW if <string> represents a number that requires over 64 bits of storage, GG_ERR_INVALID if <base> is incorrect, GG_ERR_EXIST if <string> is empty or no digits specified.
If there are trailing invalid characters (for instance "182xy" for base 10), <number> is the result of conversion up to the first invalid character and <status> is GG_ERR_TOO_MANY. In this example, <number> would be 182.
In this example, number "n" would be 49 and status "st" would be GG_OKAY:
string-number "49" to n base 10 status st
Copied!
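A sketch of the default base detection: a string prefixed with "0x" is converted as hexadecimal, so "n" below would be 26 and "st" would be GG_OKAY:
string-number "0x1a" to n status st
Copied!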
Numbers
number-expressions
number-string
set-number
string-number
See all
documentation
Syntax highlighting
For syntax highlighting of Gliimly programs in Vim, do this once:
gg -m
Copied!
The above will create a syntax file in your local Vim syntax directory:
$HOME/.vim/syntax/gliim.vim
Copied!
and also update your local $HOME/.vimrc file to use this syntax for files with .gliim extension. All files updated are local, i.e. they affect only the current user. Each user who wants this feature must issue the above command.
You can then change the color scheme to anything you like by using ":colorscheme" directly in editor, or by specifying "colorscheme" in your ".vimrc" file for a persistent change.
The Gliimly highlighting syntax is tested with Vim 8.1.
Language
inline-code
statements
syntax-highlighting
unused-var
variable-scope
See all
documentation
Temporary file
To create a temporary file, use uniq-file with a "temporary" clause. Temporary files are the same as any other files in the file-storage (and are organized in the same fashion), except that they are all under the subdirectory named "t":
/var/lib/gg/<app_name>/app/file/t
Copied!
A temporary file is not automatically deleted - you can remove it with delete-file statement when not needed (or use a periodic shell script to remove old temporary files). The reason for this is that the nature of temporary files varies: they may not necessarily span a given time frame (such as the lifetime of a request, or the lifetime of a process that serves any number of such requests), and they may be used across a number of requests for a specific purpose. Thus, it is your responsibility to remove a temporary file when it's appropriate for your application to do so.
The reason for storing temporary files in a separate directory is to gain a separation of temporary files (which likely at some point can be freely deleted) from other files.
See uniq-file for an example of creating a temporary file.
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Text utf8
Purpose: Convert text to UTF8 string.
text-utf8 <text> \
[ status <status> ] \
[ error-text <error text> ]
Copied!
text-utf8 will convert string value <text> to UTF8. <text> itself will hold the resulting UTF8 string. If you don't wish <text> to be modified, make a copy of it first (see copy-string). See utf8-text for the reverse conversion and data standards information.
You can obtain <status> in "status" clause. <status> number is GG_OKAY if successful, or GG_ERR_UTF8 if there was an error, in which case <error text> string in "error-text" clause will contain the error message.
set-string txt = "\u0459\\\"Doc\\\"\\n\\t\\b\\f\\r\\t\\u21d7\\u21d8\\t\\u25b7\\u25ee\\uD834\\uDD1E\\u13eb\\u2ca0\\u0448\\n\\/\\\"()\\t"
text-utf8 txt status txt_status error-text txt_error
set-string utf8 = "љ\"Doc\"\n\t\b\f\r\t⇗⇘\t▷◮𝄞ᏫⲠш\n/\"()\t"
if-true utf8 not-equal txt or txt_status not-equal GG_OKAY or txt_error not-equal ""
@Error in converting string to UTF8
end-if
Copied!
UTF8
text-utf8
utf8-text
See all
documentation
Trace run
Purpose: Emit trace.
trace-run [ <format>, <variable> [ , ... ] ]
Copied!
trace-run formats a tracing message according to the <format> string and a list of <variable>s and then writes the result into current process' trace file.
trace-run can be used without any clauses, in which case a location (file name and line number) is recorded in the trace file - this is useful when you only want to know if the execution passed through your code.
If trace-run has any other clauses, then <format> string must be present and there must be at least one <variable> (it means if you want to trace a simple string literal you still have to use "%s" as format).
For tracing to have effect, debugging and tracing must be enabled (see "--debug" and "--trace" options in gg). For location of trace files, see directories.
<format> string must be a literal. Variables must follow <format>, separated by commas, in the same order as placeholders. If you use any placeholders other than specified below, or the types of variables you use do not match the type of a corresponding placeholder in <format>, your program will error out. You can use the following placeholders in <format> (see the example below):
- %s for a string
- %<number>s for a string output with a width of at least <number> (any excess filled with spaces to the left),
- %ld for a number
- %<number>ld for a number output with a width of at least <number> (any excess filled with spaces to the left)
Here's an example of using the format placeholders:
%% /my-request public
@Hi
trace-run "%s it's %ld degrees outside, or with minimum width: %20s it's %20ld outside", "yes", 90, "yes", 90
%%
Copied!
Create and make the application:
sudo mgrg -i -u $(whoami) test
gg --trace --debug
Copied!
Run it:
gg -r --req="/my-request" --exec
Copied!
The output is:
Hi
Copied!
And to find the location of trace file:
gg -t 1
Copied!
The line in the trace file output by your trace-run would be similar to:
2024-08-01-17-13-55 (my_request.gliim:4)| my_request yes it's 90 degrees outside, or with minimum width: yes it's 90 outside
Copied!
trace-run "Program wrote %ld bytes into file %s", num_bytes, file_name
trace-run
Copied!
Debugging
debugging
trace-run
See all
documentation
Trim string
Purpose: Trim a string.
trim-string <string>
Copied!
trim-string trims <string>, both on left and right.
The variable "str" will be "some string" after trim-string:
set-string str = " some string ";
trim-string str
Copied!
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Uninstall
Run from command line:
sudo make uninstall
Copied!
- Application files
Note that /var/lib/gg directory is not removed, as it generally contains application files. You may move such files or delete them as you see fit.
Download and build
install
install-arch
install-debian
install-fedora
install-opensuse
uninstall
See all
documentation
Uniq file
Purpose: Create a new empty file with a unique name.
uniq-file <file name> [ temporary ]
Copied!
One of the common tasks in many applications is creating a unique file (of any kind, including temporary). uniq-file statement does that - it creates a new unique file of zero size, with <file name> being its fully qualified name, which is always within the file-storage.
The file itself is created empty. If "temporary" clause is used, then the file created is a temporary-file.
The file has no extension. You can rename it after it has been created to reflect its usage or purpose.
All files created are set up with owner and group read/write only permissions.
The following creates an empty file with an auto-generated name that will be stored in "mydoc" variable. String variable "mydoc" is defined by the statement. The string "some data" is then written to the newly created file:
uniq-file mydoc
write-file mydoc from "some data"
Copied!
To create a temporary file:
uniq-file temp_file temporary
...
delete-file temp_file
Copied!
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Unlock file
Purpose: Unlock a file.
unlock-file id <lock id>
Copied!
unlock-file will unlock a file that was locked with lock-file. <lock id> is the value obtained in lock-file's "id" clause.
See lock-file.
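A minimal sketch of pairing lock-file with unlock-file; the file path and the exact lock-file clauses shown are assumptions for illustration (see lock-file for its actual syntax):
// assumed lock-file usage; "lock_id" is obtained in its "id" clause
lock-file "locks/app.lock" id lock_id status st
...
unlock-file id lock_id
Copied!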
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Unused var
Purpose: Prevent compiler error if variable is not used.
unused-var <variable name>
Copied!
unused-var prevents erroring out if <variable name> is unused. Generally, you don't want to have unused variables - they typically indicate bugs or clutter. However, in some cases you might need such variables as a reminder for a future enhancement, or for some other reason it is unavoidable. In any case, you can use unused-var to shield such instances from causing errors.
In the following, variable "hw" is created and initialized. Such variable is not used at the moment, however if you would do so in the future and want to keep it, use unused-var to prevent compiler errors:
set-string hw = "Hello world"
unused-var hw
Copied!
Language
inline-code
statements
syntax-highlighting
unused-var
variable-scope
See all
documentation
Upper string
Purpose: Upper-case a string.
upper-string <string>
Copied!
upper-string converts all <string>'s characters to upper case.
The resulting "str" is "GOOD":
set-string str="good"
upper-string str
Copied!
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Use cursor
Purpose: Iterate to a lesser or greater key in an index.
use-cursor <cursor> ( current | get-lesser | get-greater ) \
[ key <key> ] \
[ value <value> ] \
[ update-value <update value> ] \
[ status <status> ]
Copied!
use-cursor uses <cursor> previously created (see read-index, write-index) for iteration over index nodes with lesser or greater key values. It can also obtain keys and values for such nodes, as well as update their values.
A <cursor> has a current node, which is first computed by using "current", "get-lesser" or "get-greater" clauses, and then any other clauses are applied to it (such as "key", "value" and "update-value").
The computation of a current node is performed by using a <cursor>'s "previous current node", i.e. the current node just before use-cursor executes. If "current" clause is used, the current node remains the same as previous current node. If "get-lesser" clause is used, a node with a key that is the next lesser from the previous current will become the new current. If "get-greater" clause is used, a node with a key that is the next greater from the previous current will become the new current.
If the new current node can be found, then other use-cursor clauses are applied to it, such as to obtain a <key> or <value>, or to <update value>. If, as a result of either of these clauses, the new current node cannot be found (for instance there is no lesser or greater key in the index), the current node will be unchanged and <status> (in "status" clause) will be GG_ERR_EXIST.
Key, value, updating value, status
"key" clause will obtain the key in a current node into <key> string. The value of current node can be obtained in <value> in "value" clause; <value> is a string. The value of current node can be updated to <update value> in "update-value" clause; <update value> is a string. This update is performed after <value> has been retrieved, allowing you to obtain the previous value in the same statement.
"status" clause can be used to obtain <status> number, which is GG_ERR_EXIST if the new current node cannot be found, in which case the current node, <key> and <value> are unchanged. Otherwise, <status> is GG_OKAY.
The following will find a value with key "999", and then iterate in the index to find all lesser values (in descending order):
set-string k = "999"
read-index myindex equal k status st \
value val new-cursor cur
start-loop
if-true st equal GG_OKAY
@Value found is [<<p-out val>>] for key [<<p-out k>>]
use-cursor cur get-lesser status st value val key k
else-if
break-loop
end-if
end-loop
Copied!
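A sketch of updating the value of the current node while obtaining its previous value, assuming cursor "cur" from the example above; the new value "new data" is made up for illustration:
use-cursor cur current key k value old_val update-value "new data" status st
Copied!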
For more examples, see new-index.
Index
delete-index
get-index
new-index
purge-index
read-index
use-cursor
write-index
See all
documentation
Utf8 text
Purpose: Convert UTF8 string to text.
utf8-text <utf8> \
[ to <text> ] \
[ length <length> ] \
[ status <status> ] \
[ error-text <error text> ]
Copied!
utf8-text will convert <utf8> text to <text> (specified with "to" clause). If <text> is omitted, then the result of conversion is output.
<utf8> is a string that may contain UTF8 characters (as 2, 3 or 4 bytes representing a unicode character). The conversion creates a string that can be used as a value where a text representation of UTF8 is required. utf8-text is performed according to RFC7159 and RFC3629 (the UTF8 standard).
Note that hexadecimal characters used for Unicode (such as \u21d7) are always lowercase. Solidus character ("/") is not escaped, although text-utf8 will correctly process it if the input has it escaped.
The number of bytes in <utf8> to be converted can be specified with <length> in "length" clause. If <length> is not specified, it is the length of string <utf8>. Note that a single UTF-8 character can be anywhere between 1 to 4 bytes. For example "љ" is 2 bytes in length.
The status of encoding can be obtained in number <status>. <status> is the string length of the result in <text> (or the number of bytes output if <text> is omitted), or -1 if error occurred (meaning <utf8> is an invalid UTF8) in which case <text> (if specified) is an empty string and the error text can be obtained in <error text> in "error-text" clause.
Convert UTF8 string to text and verify the expected result:
set-string utf8_str = "\"Doc\"\n\t\b\f\r\t⇗⇘\t▷◮𝄞ᏫⲠш\n/\"()\t"
utf8-text utf8_str status encstatus to text_text
(( expected_result
@\"Doc\"\n\t\b\f\r\t\u21d7\u21d8\t\u25b7\u25ee\ud834\udd1e\u13eb\u2ca0\u0448\n/\"()\t
))
if-true text_text equal expected_result and encstatus not-equal -1
@utf8-text worked okay
end-if
Copied!
UTF8
text-utf8
utf8-text
See all
documentation
Variable scope
Gliimly uses scope rules for variables that are similar to other programming languages.
Language
inline-code
statements
syntax-highlighting
unused-var
variable-scope
See all
documentation
Write array
Purpose: Store key/value pair into an array.
write-array <array> \
key <key> \
value <value> \
[ status <status> ] \
[ old-value <old value> ]
Copied!
write-array will store string <key> (in "key" clause) and <value> (in "value" clause) into array <array>, which must be created with new-array.
<key> and <value> are collectively called an "element".
If <key> already exists in the array, then the old value associated with it is returned in string <old value> (in "old-value" clause) and <value> will replace the old value - in this case <status> number (in "status" clause) has a value of GG_INFO_EXIST.
If <key> did not exist, <status> will be GG_OKAY and <old value> is unchanged.
If an <array> was created with "process-scope" clause (see new-array), then the element (including <key> and <value>) will not be freed when the current request ends, rather it will persist while the process runs, unless deleted (see read-array with delete clause).
Writing data to array:
new-array h
write-array h key "mykey" value "some data"
Copied!
Writing new value with the same key and obtaining the previous value (which is "some data"):
write-array h key "mykey" value "new data" status st old-value od
if-true st equal GG_INFO_EXIST
@Previous value for this key is <<p-out od>>
end-if
Copied!
The following is an array key/value service, where a process-scoped array is created. It provides inserting, deleting and querying key/value pairs. Such a service process can run indefinitely:
%% /arraysrv public
do-once
new-array h hash-size 1024 process-scope
end-do-once
get-param op
get-param key
get-param data
if-true op equal "add"
write-array h key key value data old-value old_data status st
if-true st equal GG_INFO_EXIST
delete-string old_data
end-if
@Added [<<p-out key>>]
else-if op equal "delete"
read-array h key (key) value val delete status st
if-true st equal GG_ERR_EXIST
@Not found [<<p-out key>>]
else-if
@Deleted [<<p-out val>>]
delete-string val
end-if
else-if op equal "query"
read-array h key (key) value val status st
if-true st equal GG_ERR_EXIST
@Not found, queried [<<p-out key>>]
else-if
@Value [<<p-out val>>]
end-if
end-if
%%
Copied!
Create and make the application, then run it as service:
// Create application
sudo mgrg -i -u $(whoami) arr
// Make application
gg -q
// Start application (single process key service)
mgrg -w 1 arr
Copied!
Try it from a command line client (see gg):
// Add data
gg -r --req="/arraysrv/op=add/key=15/data=15" --service --app="/arr" --exec
// Query data
gg -r --req="/arraysrv/op=query/key=15" --service --app="/arr" --exec
// Delete data
gg -r --req="/arraysrv/op=delete/key=15" --service --app="/arr" --exec
Copied!
See read-array for more examples.
Array
get-array
new-array
purge-array
read-array
resize-array
write-array
See all
documentation
Write fifo
Purpose: Write key/value pair into a FIFO list.
write-fifo <list> \
key <key> \
value <value>
Copied!
write-fifo adds a pair of key/value pointers to the FIFO <list>, specified with strings <key> and <value> (in "key" and "value" clauses, collectively called an "element").
It always adds elements to the end of the list.
new-fifo nf
write-fifo nf key "mykey" value "myvalue"
Copied!
FIFO
delete-fifo
new-fifo
purge-fifo
read-fifo
rewind-fifo
write-fifo
See all
documentation
Write file
Purpose: Write to a file.
write-file <file> | ( file-id <file id> ) \
from <content> \
[ length <length> ] \
[ ( position <position> ) | ( append [ <append> ] ) ] \
[ status <status> ]
Copied!
Without "file-id" clause, this is a simple method of writing a file: file named <file> is opened, data is written, and the file is closed. <file> can be a full path name, or a path relative to the application home directory (see directories).
write-file writes <content> to <file>. If "append" clause is used without boolean variable <append>, or if <append> evaluates to true, the <content> is appended to the file; otherwise the file is overwritten with <content>, unless "position" clause is used in which case file is not overwritten and <content> is written at byte <position> (with 0 being the first byte). Note that <position> can be beyond the end of file, in which case null-bytes are written between the current end of file and <position>.
File is created if it does not exist (even if "append" is used), unless "position" clause is used in which case file must exist.
If "length" is not used, then a whole string <content> is written to a file, and the number of bytes written is the length of that string. If "length" is specified, then exactly <length> bytes are written.
If "status" clause is used, then the number of bytes written is stored to <status>, unless error occurred, in which case <status> has the error code. The error code can be GG_ERR_POSITION (if <position> is negative or file does not support it), GG_ERR_WRITE (if there is an error writing file) or GG_ERR_OPEN if file cannot be open. Note that no partial data will be written; if all of data cannot be written to the file, then none will be written, and in that case an error of GG_ERR_WRITE will be reported in <status>.
With "file-id" clause, write-file uses a <file id> that was created with open-file. You can then write (and read) the file using this <file id>, and the file stays open until close-file is called or the request ends.
If "position" clause is used, then data is written starting from byte <position>, otherwise writing starts from the current file position determined by the previous reads/writes or as set by using "set" clause in file-position. After each read or write, the file position is advanced by the number of bytes read or written. Position can be set passed the last byte of the file, in which case writing will fill the space between the current end of file and the current position with null-bytes.
If "length" is not used, then a whole string is written to a file, and the number of bytes written is the length of that string. If "length" is specified, then exactly <length> bytes are written.
If "append" clause is used without boolean variable <append>, or if <append> evaluates to true, then file pointer is set at the end of file and data written.
If "status" clause is used, then the number of bytes written is stored to <status>, unless error occurred, in which case <status> has the error code. The error code can be GG_ERR_POSITION (if <position> is negative or file does not support it), GG_ERR_WRITE (if there is an error writing file) or GG_ERR_OPEN if file is not open. Note that no partial data will be written; if all of data cannot be written to the file, then none will be written, and in that case an error of GG_ERR_WRITE will be reported in <status>.
To overwrite file "/path/to/file" with "Hello World":
write-file "/path/to/file" from "Hello World"
Copied!
To append "Hello World" to file:
set-string path="/path/to/file"
set-string cont="Hello World"
write-file path from cont append
Copied!
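A sketch of using a boolean variable with "append" (the variable "do_append" is made up for illustration); when it evaluates to false, the file is overwritten instead:
set-bool do_append = true
// appends because do_append is true
write-file path from cont append do_append
Copied!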
To write only 5 bytes (i.e. "Hello") and get status (if successful, number "st" would be "5"):
set-string cont="Hello World"
write-file "file" from cont length 5 status st
Copied!
To write a string "Hello" at byte position 3 in the existing "file":
set-string cont="Hello"
write-file "file" from cont position 3 status st
Copied!
See open-file for an example with "file-id" clause.
Files
close-file
copy-file
delete-file
file-position
file-storage
file-uploading
lock-file
open-file
read-file
read-line
rename-file
stat-file
temporary-file
uniq-file
unlock-file
write-file
See all
documentation
Write index
Purpose: Insert a key/value pair into an index.
write-index <index> key <key> value <value> \
[ status <status> ] \
[ new-cursor <cursor> ]
Copied!
write-index inserts string <key> and associated string <value> into <index> created by new-index.
If <key> already exists in <index>, <status> (in "status" clause) will be GG_ERR_EXIST and nothing is inserted into <index>, otherwise it is GG_OKAY.
If "new-cursor" clause is used, then a <cursor> will be positioned on a newly inserted index node. You can use use-cursor to iterate to nodes with lesser and greater key values.
Insert key "k" with value "d" into "myindex", and obtain status in "st":
write-index myindex key k value d status st
Copied!
The following is an example of a process-scoped index. Such an index keeps its data across requests, for as long as the process is alive.
In a new directory, create file indexsrv.gliim and copy to it:
%% /indexsrv public
do-once
new-index t process-scope
end-do-once
get-param op
get-param key
get-param data
if-true op equal "add"
write-index t key (key) value data status st
if-true st equal GG_OKAY
@Added [<<p-out key>>]
else-if
@Key exists
end-if
else-if op equal "delete"
delete-index t key (key) value val status st
if-true st equal GG_ERR_EXIST
@Not found [<<p-out key>>]
else-if
@Deleted [<<p-out val>>]
delete-string val
end-if
else-if op equal "query"
read-index t equal (key) value val status st
if-true st equal GG_ERR_EXIST
@Not found, queried [<<p-out key>>]
else-if
@Value [<<p-out val>>]
end-if
end-if
%%
Copied!
Create new application ("pi" for "process index"):
sudo mgrg -i -u $(whoami) pi
Copied!
Build application:
gg -q
Copied!
Run the index service:
mgrg -w 1 pi
Copied!
Try it out, add key/value pairs, query, delete, query again:
# Add key=1 and data=d1
$ gg -r --req="/indexsrv/op=add/key=1/data=d1" --service --exec --silent-header
Added [1]
# Add key=2 and data=d2
$ gg -r --req="/indexsrv/op=add/key=2/data=d2" --service --exec --silent-header
Added [2]
# Query key=1
$ gg -r --req="/indexsrv/op=query/key=1" --service --exec --silent-header
Value [d1]
# Query key=2
$ gg -r --req="/indexsrv/op=query/key=2" --service --exec --silent-header
Value [d2]
# Delete key=2
$ gg -r --req="/indexsrv/op=delete/key=2" --service --exec --silent-header
Deleted [d2]
# Query key=2
$ gg -r --req="/indexsrv/op=query/key=2" --service --exec --silent-header
Not found, queried [2]
Copied!
See read-index for more examples.
Index
delete-index
get-index
new-index
purge-index
read-index
use-cursor
write-index
See all
documentation
Write lifo
Purpose: Write key/value pair into a LIFO list.
write-lifo <list> \
key <key> \
value <value>
Copied!
write-lifo adds a pair of key/value to the LIFO <list>, specified with strings <key> and <value> (in "key" and "value" clauses, collectively called an "element").
It always adds an element so that the last one written to <list> would be the first to be read with read-lifo.
new-lifo nf
write-lifo nf key "mykey" value "myvalue"
Copied!
LIFO
delete-lifo
new-lifo
purge-lifo
read-lifo
rewind-lifo
write-lifo
See all
documentation
Write list
Purpose: Write key/value pair into a linked list.
write-list <list> key <key> \
value <value> \
[ append [ <append> ] ]
Copied!
write-list adds a pair of key/value strings to the linked <list>, specified with <key> and <value> (in "key" and "value" clauses, collectively called an "element").
The key/value pair is added just prior to the list's current position, thus becoming the current element.
If "append" clause is used without boolean variable <append>, or if <append> evaluates to true, then the element is added at the end of the list, and the list's current element becomes the newly added one.
Add a key/value pair to the end of the list:
new-list mylist
write-list mylist key "mykey" value "myvalue" append
Copied!
The following is a list that is process-scoped, i.e. it is a linked-list server, which can add, delete, read and position to various elements:
%% /llsrv public
do-once
new-list t process-scope
end-do-once
get-param op
get-param key
get-param data
if-true op equal "add"
write-list t key (key) value data append
@Added [<<p-out key>>] value [<<p-out data>>]
else-if op equal "delete"
position-list t first
read-list t key (key) value val status st
if-true st equal GG_ERR_EXIST
@Not found
else-if
@Deleted key [<<p-out key>>], [<<p-out val>>]
delete-list t
end-if
else-if op equal "next"
position-list t next status st
if-true st equal GG_OKAY
@Okay
else-if
@Not found
end-if
else-if op equal "last"
position-list t last status st
if-true st equal GG_OKAY
@Okay
else-if
@Not found
end-if
else-if op equal "previous"
position-list t previous status st
if-true st equal GG_OKAY
@Okay
else-if
@Not found
end-if
else-if op equal "first"
position-list t first status st
if-true st equal GG_OKAY
@Okay
else-if
@Not found
end-if
else-if op equal "query"
read-list t key (key) value val status st
if-true st equal GG_ERR_EXIST
@Not found
else-if
@Key [<<p-out key>>], value [<<p-out val>>]
end-if
else-if op equal "purge"
purge-list t
end-if
%%
Copied!
Create application:
sudo mgrg -i -u $(whoami) linkserver
Copied!
Start the server:
mgrg -w 1 linkserver
Copied!
Try it out:
gg -r --req="/llsrv/op=add/key=1/data=1" --exec --service
gg -r --req="/llsrv/op=add/key=2/data=2" --exec --service
gg -r --req="/llsrv/op=query" --exec --service
gg -r --req="/llsrv/op=previous" --exec --service
gg -r --req="/llsrv/op=query" --exec --service
Copied!
Linked list
delete-list
get-list
new-list
position-list
purge-list
read-list
write-list
See all
documentation
Write message
Purpose: Write key/value to a message.
write-message <message> key <key> value <value>
Copied!
write-message will append to <message> a key/value pair, in the form of string <key> (in "key" clause) and string <value> (in "value" clause).
<message> must have been created with new-message. In order to use write-message, a message must not have been read from (see read-message).
new-message msg
write-message msg key "key1" value "value1"
Copied!
Messages
get-message
new-message
read-message
write-message
See all
documentation
Write string
Purpose: Create complex strings.
write-string <string>
<any code>
end-write-string [ notrim ]
Copied!
Output of any Gliimly code can be written into <string> with write-string. In between write-string and end-write-string you can write <any Gliimly code>. For instance you can use database queries, conditional statements etc., just as you would for any other Gliimly code.
Note that instead of write-string you can also use a shortcut "((" (and instead of end-write-string you can use "))" ), for example here a string "fname" holds a full path of a file named "config-install.gliim" under the application home directory (see directories):
get-app directory to home_dir
(( fname
@<<p-out home_dir>>/config-install.gliim
))
Copied!
Just like with all other Gliimly code, every line is trimmed both on left and right, so this:
(( mystr
@Some string
))
Copied!
is the same as:
(( mystr
@Some string <whitespaces>
))
Copied!
write-string (or "((") statement must always be on a line by itself (and so does end-write-string, or "))" statement). The string being built starts with the line following write-string, and ends with the line immediately prior to end-write-string.
All trailing empty lines are removed, for example:
(( mystr
@My string
@
@
))
Copied!
the above string would have two trailing empty lines, however they will be removed. If you want to skip trimming the trailing whitespaces, use "notrim" clause in end-write-string.
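A minimal sketch of the "notrim" clause, which keeps the trailing empty line in "mystr":
write-string mystr
@My string
@
end-write-string notrim
Copied!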
- Simple
A simple example:
set-string my_str="world"
set-string my_str1="and have a nice day too!"
write-string result_str
@Hello <<p-out my_str>> (<<p-out my_str1>>)
end-write-string
p-out result_str
Copied!
The output is
Hello world (and have a nice day too!)
Copied!
- Using code inside
Here is using Gliimly code inside write-string, including database query and conditional statements to produce different strings at run-time:
get-param selector
set-string my_str="world"
write-string result_str
if-true selector equal "simple"
@Hello <<p-out my_str>> (and have a nice day too!)
else-if selector equal "database"
run-query @db="select name from employee" output name
@Hello <<p-out name>>
@<br/>
end-query
else-if
@No message
end-if
end-write-string
p-out result_str
Copied!
If selector variable is "simple", as in URL
https://mysite.com/<app name>/some-service?selector=simple
Copied!
the result is
Hello world (and have a nice day too!)
Copied!
If selector variable is "database", as in URL
https://mysite.com/<app name>/some-service?selector=database
Copied!
the result may be (assuming "Linda" and "John" are the two employees selected):
Hello Linda
<br/>
Hello John
<br/>
Copied!
If selector variable is anything else, as in URL
https://mysite.com/<app name>/some-service?selector=something_else
Copied!
the result is
No message
Copied!
- Using call-handler calls inside
The following uses a call-handler inside write-string:
set-string result_str=""
write-string result_str
@<<p-out "Result from other-service">> is <<call-handler "/other-service">>
end-write-string
p-out result_str
Copied!
The "other-service" may be:
begin-handler /other-service public
@"Hello from other service"
end-handler
Copied!
The output:
Result from other-service is Hello from other service
Copied!
- Nesting
An example to nest write-strings:
write-string str1
@Hi!
write-string str2
@Hi Again!
end-write-string
p-out str2
end-write-string
p-out str1
Copied!
The result is
Hi!
Hi Again!
Copied!
Strings
copy-string
count-substring
delete-string
lower-string
read-split
replace-string
set-string
split-string
string-length
trim-string
upper-string
write-string
See all
documentation
Copyright (c) 2019-2024 Gliim LLC. All contents on this web site is "AS IS" without warranties or guarantees of any kind.