I’ve run into situations where I needed to read the values of bash variables defined in a file into a Python script.
In bash itself this is done simply with:
source env.vars
or
. env.vars
Where the content of env.vars could be something like this:
BASE=/home/user/scripts
BIN_DIR=$BASE/bin
VAR1=xxxxx
VAR2=yyyy
The solutions described here achieve that effect:
- http://stackoverflow.com/questions/17435056/read-bash-variables-into-a-python-script
But they require that every variable be exported in the sourced file. Sometimes this is not possible, due to file permissions or other restrictions.
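For reference, those solutions boil down to something like the following sketch: source the file, then dump the environment with env and parse the output. Only exported variables survive the env step, which is exactly the limitation:

import subprocess

# Source the file, then print the resulting environment. Only variables
# that the file explicitly exports will appear in the output of 'env'.
command = ['bash', '-c', 'source env.vars && env']
output = subprocess.check_output(command, universal_newlines=True)

env_vars = {}
for line in output.splitlines():
    if '=' in line:
        name, value = line.split('=', 1)
        env_vars[name] = value

# Note: env_vars also contains everything inherited from the parent
# environment, not just the variables defined in env.vars.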
The following solution takes care of that issue:
import re
import subprocess
from collections import defaultdict

def load_vars(file_name):
    variables = defaultdict(str)
    # Run bash in trace mode (-x): every command executed while sourcing
    # the file is echoed to stderr before it runs.
    command = ['bash', '-xc', 'source %s' % file_name]
    proc = subprocess.Popen(command, stderr=subprocess.PIPE,
                            universal_newlines=True)
    # Assignments made inside the sourced file are traced as: ++ NAME=value
    reg = re.compile(r'^\+\+ (?P<name>\w+)=(?P<value>.+)')
    for line in proc.stderr:
        m = reg.match(line)
        if m:
            variables[m.group('name')] = m.group('value')
    proc.communicate()
    return variables
This also works better than simply reading and parsing the file yourself, since bash performs variable expansion before printing the trace, so assignments like
VAR2=$VAR1/abcd
resolve to their expanded values.
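To see what the regular expression is matching, look at the trace bash writes to stderr. For the example file above it looks roughly like this (the ++ prefix marks commands executed inside the sourced file, which is why the regex anchors on it):

+ source env.vars
++ BASE=/home/user/scripts
++ BIN_DIR=/home/user/scripts/bin
++ VAR1=xxxxx
++ VAR2=yyyy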
For the example file above, the function returns something like this (under Python 3):
defaultdict(<class 'str'>, {'BASE': '/home/user/scripts', 'BIN_DIR': '/home/user/scripts/bin', 'VAR1': 'xxxxx', 'VAR2': 'yyyy'})
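A quick usage sketch; the os.environ.update call is just one option, assuming you want the variables visible to child processes started later:

import os

variables = load_vars('env.vars')
print(variables['BIN_DIR'])   # -> /home/user/scripts/bin

# Optionally push the values into this process's environment so that
# any subprocesses started afterwards inherit them.
os.environ.update(variables)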