# Reverse Engineering Notes

For this research, the NLS Docker setup is used.
[TOC]
## Appliance

The instructions assume that Docker and the NLS stack are installed.

### Get / copy file structure
- Create a target directory, e.g. `mkdir /opt/nls-files`
- Get the container ID of the NLS appliance (`docker ps`)
- Copy files from the container: `docker cp <container-id>:<path-in-container> /opt/nls-files` (see the sketch after this list)
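For scripted copies, the Docker SDK for Python can pull a whole directory out of the running appliance as a tar stream. This is a minimal sketch, assuming the SDK (`pip install docker`) is available; the container ID is a placeholder and `/etc/dls/config` is just an example path from the notes below.

```python
# Sketch: copy a directory out of the NLS appliance container via the Docker SDK.
# <container-id> is the ID shown by `docker ps`; the path is just an example.
import io
import tarfile

import docker

client = docker.from_env()
container = client.containers.get('<container-id>')

# get_archive() returns the requested path as a tar stream plus stat metadata
bits, stat = container.get_archive('/etc/dls/config')
archive = io.BytesIO(b''.join(bits))
with tarfile.open(fileobj=archive) as tar:
    tar.extractall('/opt/nls-files')

print(stat)  # name, size, mtime of the copied path
```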
### About configuration data

- Most variables and configs are stored in `/var/lib/docker/volumes/configurations/_data`.
- Config variables are in `etc/dls/config/service_env.conf` (see the sketch below).
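To get a quick overview of the config variables, the file can be dumped with a few lines of Python. This is only a sketch: it assumes simple `KEY=VALUE` lines, and the full path is an assumption that combines the configurations volume with the relative path above.

```python
# Sketch: print the variables from service_env.conf.
# Assumes KEY=VALUE lines; the full path is an assumption combining the
# configurations volume with the relative path mentioned above.
from pathlib import Path

conf = Path('/var/lib/docker/volumes/configurations/_data/etc/dls/config/service_env.conf')
for line in conf.read_text().splitlines():
    line = line.strip()
    if not line or line.startswith('#'):
        continue
    key, sep, value = line.partition('=')
    if sep:
        print(f'{key.strip()} = {value.strip()}')
```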
### NLS logs

Logs are found in `/var/lib/docker/volumes/logs/_data`.

The most interesting logs are:

- `fileInstallation.log`
- `serviceInstance.log`
### File manipulation

- Files can be copied with `docker cp <container-id>:/venv/... /opt/localfile/...`.
- Files can also be edited directly via the Docker overlay mounts; see `df -h` (one mount belongs to the NLS container, the other to the Postgres container):

  ```
  overlay 16G 11G 5.6G 66% /var/lib/docker/overlay2/<hash>/merged
  overlay 16G 11G 5.6G 66% /var/lib/docker/overlay2/<hash>/merged
  ```

  Files can then be edited in place, e.g. `nano venv/lib/python3.12/site-packages/...` (see the sketch after this section for locating the right mount).

After editing any file directly, the service needs to be restarted: `docker restart <container-id>`.
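Instead of guessing the right `<hash>` from `df -h`, the merged overlay directory can be read from the container metadata. A sketch assuming the Docker SDK for Python and the overlay2 storage driver; the container ID is a placeholder.

```python
# Sketch: resolve the merged overlay2 directory of a container, so files can be
# edited in place on the Docker host. Assumes the overlay2 storage driver.
import docker

client = docker.from_env()
container = client.containers.get('<container-id>')  # NLS or Postgres container

merged_dir = container.attrs['GraphDriver']['Data']['MergedDir']
print(merged_dir)  # e.g. /var/lib/docker/overlay2/<hash>/merged
```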
## Database

### DB access

Valid users are `dls_writer` and `postgres`.

- Log in to your Docker server.
- Get the container ID of the NLS Postgres container (`docker ps`).
- Use psql: `docker exec -it <container-id> psql -h localhost -U postgres`
- (optional) Create a superuser for external access: `CREATE USER admin WITH LOGIN SUPERUSER PASSWORD 'admin';`
- Add an exposed port in `docker-compose.yaml`: add `ports: [ '5432:5432' ]` to the `...` section (see the connection sketch after this list).
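With the superuser created and the port exposed, the database is reachable from outside the container. A minimal sketch, assuming `psycopg2` is installed, the `admin`/`admin` credentials from above, and the default `postgres` database name; the host is a placeholder.

```python
# Sketch: connect to the exposed NLS Postgres instance and list user tables.
# Assumes the 'admin' superuser created above and the 5432:5432 port mapping.
import psycopg2

conn = psycopg2.connect(host='<docker-host>', port=5432,
                        user='admin', password='admin', dbname='postgres')
with conn, conn.cursor() as cur:
    cur.execute("""
        select table_schema, table_name
        from information_schema.tables
        where table_schema not in ('pg_catalog', 'information_schema')
        order by 1, 2
    """)
    for schema, table in cur.fetchall():
        print(f'{schema}.{table}')
conn.close()
```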
### Table structure

Some information about the database / table structure can be found in `database/Layout.md`.
## Logging / stack trace

## Nginx

- NGINX uses `/opt/certs/cert.pem` and `/opt/certs/key.pem`.
## Other tools / files

Other tools / files which may be helpful, but whose purpose is not yet known.
### Appliance - `master_pwd.bin` and `master_pwd_v100.bin`

There are two files in `/var/lib/docker/overlay2/<nls-container-id>/merged/etc/dls/config`.

When base64-decoded, both files have a length of 256 bytes.

If these files are renamed (adding `.bak`), the NLS stack still comes up normally.

These files are static and do not change after resetting the NLS instance.

`master_pwd.bin`:

```
RdX1Fng5fYUEq+hSvQcDPdZmKkLfEfVd9k6OU6BG0UpFz1s9fbT5H2fqPBcxFogg
yFNvJnLizyGi0nSLEQQL/+wmYcTnFj0TI5svrKMXLLM2gED8ZozvTyKQNpVZssad
QkLz7Y3qaSFG90OEXD0MU48SgUkfUF4jnyU2jxph72AuI5djjWV+fjfNM60q/nOb
RUrn5hWm4qczga8N6eK6J8hOXNRm+9k2tT4PG+MuqhwdHxfdY9eCnNWVZWfqg3e9
OnRDt/ZZvBg8po4aP21Nobqr0UDkIrwbBa+b/gC6BTF/1/MwoLPmFFNlnnB8yJXs
Gn0aj+WXNuH+Syt+BiNUcQ==
```

`master_pwd_v100.bin`:

```
mht1iKDDsWrp/AVs2h5wpO14FKbIwAQW4yVKEYaDaPIgWQmapRqsm2gqIYgZ8nqb
emRg51JtCdPRzKXFFKswae2wzLj1ldc4ksrVGRFyyh6Kn4bmtZs30tgwIRnuh2PT
XbhVZTQVD2SbiWTBl7SjgVpuUnUdhJWVGnF/0ksMaB3GVlgnVULJnomaUlvgdeL0
NE4PacjTQsfXoRdbjAeE390PcK+Ax8OQATXA9AezXV3EnjRAapOBCRqCb3S+lOVJ
oL0050bYWC85Ijb+SDR3fl1UxQf2XgaKKx2E0/gUy0EtCLtpwT4L0iS2hzKENxgd
B0kFriXRTxdejBmBPl36jQ==
```
### Appliance - `site_key_uri.bin`

This file can also be found in `/var/lib/docker/overlay2/<nls-container-id>/merged/etc/dls/config`.

When base64-decoded, this file has a length of 256 bytes.

If this file is renamed (adding `.bak`), the NLS stack still comes up normally.

This file is static and does not change after resetting the NLS instance.

```
0a3MZny/w+hEduuSakCLM5ADlr9oKapdjIrZIM5A7mzq3e8I0UPVb9m6DOXlzJe8
wu+X+gWdIMjPED0GqqyNUQ3MlklaXE1jIvA7NBUeskSdSAACYEX6IZRNVQvSs2Yn
CFKijSQi1d33EmTmVKLwTEqEgD+SR494yDn6AEUZRqNuTMYyNhDYJoIdLons4RG6
m56oy1WRGSdHRiBt/6Mbb2I7BQ+YNsPrq9pI9wdPxbCbyT8UbEPM0/Qo4RSH77lx
+rtqhXsNqrciBxg9iCfYTDBKW7gG8+3U+7OFkPfN7nAYfxEAKKO0z/vPih0KIF12
ipX9bJaK63sIplYtPSBB2A==
```
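The 256-byte decoded length of these blobs is easy to verify. The sketch below assumes the files have already been copied out of the container, e.g. into `/opt/nls-files/etc/dls/config` (a path made up for this example).

```python
# Sketch: base64-decode the static config blobs and print their decoded length.
# The directory is an assumed local copy of /etc/dls/config from the container.
import base64
from pathlib import Path

config_dir = Path('/opt/nls-files/etc/dls/config')  # hypothetical local copy
for name in ('master_pwd.bin', 'master_pwd_v100.bin', 'site_key_uri.bin'):
    raw = base64.b64decode(config_dir.joinpath(name).read_text())
    print(f'{name}: {len(raw)} bytes')  # expected: 256 bytes each
```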
Appliance:

- `/etc/dls/config/site_key_uri.bin`
- `/etc/dls/config/dls_db_password.bin`

Database:

- `/etc/dls/config/decryptor/decryptor`
## Other code

Interestingly, the `service_instance.deployment` public key is used for encryption. For that one, we have no private key.

See `public_key_string=si_deployment_public_key.value` in `return_file_export_manager.py`:
```python
class ReturnFileExportManager:

    def return_file_export_handler(self, event_args, params, dal):
        if 'file_timestamp' not in event_args:
            # file_timestamp not in event_args means original request on primary,
            # so we get current time as file_timestamp
            license_allocation_file_timestamp = datetime.utcnow()
            # modify incoming event_args parameter to add file_timestamp,
            # so broadcaster to sends file_timestamp to secondary
            event_args['file_timestamp'] = license_allocation_file_timestamp
        else:
            # file_timestamp in event_args means replication call on secondary
            # so we use file_timestamp from event_args
            license_allocation_file_timestamp = event_args['file_timestamp']

        license_allocation_file_xid = self.processor.get_license_file_xid()
        log.info(f'Generating license allocation return file: {license_allocation_file_xid}')

        # Generate license allocation data
        license_allocation = LicenseAllocation()
        license_allocation.header = LicenseAllocationHeader(params.license_allotment_xid)
        log.info(f'Generating return for license allocation: {params.license_allotment_xid}')
        license_allocation.object_list = self._get_object_list(params, dal)

        try:
            si_deployment_public_key = dal.get_si_artifact_for_license_allotment(
                params.license_allotment_xid, si_constants.SERVICE_INSTANCE_DEPLOYMENT_NAMESPACE,
                si_constants.ARTIFACT_NAME_PUBLIC_KEY
            )
        except NotFoundError as ex:
            log.error(f'Error fetching artifacts for SI attached to this license allocation return file', ex)
            raise BadRequestError("Failed to return license allocation file")

        # Build license file payload string
        encrypted_payload_str = self.processor.build_license_payload(
            license_allocation_file_xid=license_allocation_file_xid,
            license_allocation_file_timestamp=license_allocation_file_timestamp,
            license_allocation_list=[license_allocation],
            public_key_string=si_deployment_public_key.value,
            deployment_token="")

        # insert LAF record
        dal.insert_file_creation_record(license_allocation_file_xid, license_allocation_file_timestamp,
                                        params.license_allotment_xid, encrypted_payload_str)
        response = ReturnFileResponse(return_license=encrypted_payload_str)
        return response
```
`dls_license_file_installation_dal.py`:

```python
class DlsLicenseFileInstallationDal:

    def insert_file_creation_record(self, schema, license_file_xid, license_file_timestamp, license_allotment_xid,
                                    license_allocation_file, session=None):
        insert_file_creation_record_query = f"""
            insert into {schema}.license_allotment_file_publication (xid, license_allotment_xid, publication_detail)
            values (:xid, :la_xid, :publication_detail)
            on conflict (xid) do update
            set license_allotment_xid = :la_xid, publication_detail = :publication_detail
        """
        publication_detail_dict = {
            'timestamp': license_file_timestamp.isoformat(),
            'license': license_allocation_file,
        }
        publication_detail = json_dumps(publication_detail_dict)
        session.execute(insert_file_creation_record_query, {'xid': license_file_xid, 'la_xid': license_allotment_xid,
                                                            'publication_detail': publication_detail})
```
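The query above shows that generated return files end up in `license_allotment_file_publication`, with `publication_detail` holding a JSON document containing `timestamp` and `license`. A sketch for reading them back, reusing the connection settings from the DB access section; the schema name is a placeholder, since the DAL receives it as a parameter.

```python
# Sketch: read the stored return-file payloads written by insert_file_creation_record().
import json
import psycopg2

schema = '<schema>'  # placeholder: the real schema name is passed into the DAL
conn = psycopg2.connect(host='<docker-host>', port=5432,
                        user='admin', password='admin', dbname='postgres')
with conn, conn.cursor() as cur:
    cur.execute(f"""
        select xid, license_allotment_xid, publication_detail
        from {schema}.license_allotment_file_publication
    """)
    for xid, la_xid, detail in cur.fetchall():
        # publication_detail may come back as text or already parsed (jsonb)
        detail = detail if isinstance(detail, dict) else json.loads(detail)
        print(xid, la_xid, detail['timestamp'], len(detail['license']))
conn.close()
```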
## Useful commands on the client

### Check licensing status

`nvidia-smi -q | grep "License"`

Output:

```
vGPU Software Licensed Product
    License Status : Licensed (Expiry: 2023-1-14 12:59:52 GMT)
```
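For scripted checks, the same information can be pulled out of `nvidia-smi -q` programmatically. A sketch, assuming `nvidia-smi` is on the PATH and the output format shown above.

```python
# Sketch: extract the vGPU license status from `nvidia-smi -q`.
# Relies on the "License Status : ..." line shown in the output above.
import subprocess

output = subprocess.run(['nvidia-smi', '-q'],
                        capture_output=True, text=True, check=True).stdout
for line in output.splitlines():
    if 'License Status' in line:
        status = line.split(':', 1)[1].strip()
        print(status)  # e.g. "Licensed (Expiry: 2023-1-14 12:59:52 GMT)"
```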
### Track licensing progress

- NVIDIA Grid log: `journalctl -u nvidia-gridd -f`

```
systemd: Started NVIDIA Grid Daemon.
nvidia-gridd: Configuration parameter ( ServerAddress ) not set
nvidia-gridd: vGPU Software package (0)
nvidia-gridd: Ignore service provider and node-locked licensing
nvidia-gridd: NLS initialized
nvidia-gridd: Acquiring license. (Info: license.nvidia.space; NVIDIA RTX Virtual Workstation)
nvidia-gridd: License acquired successfully. (Info: license.nvidia.space, NVIDIA RTX Virtual Workstation; Expiry: 2023-1-29 22:3:0 GMT)
```