The CMake environment variable CMAKE_BUILD_PARALLEL_LEVEL can be set to control the default number of parallel build threads.
Parallel builds are virtually always desired to save build and rebuild time.
As a starting point, set the CMAKE_BUILD_PARALLEL_LEVEL environment variable equal to the number of physical or logical CPU cores, for example in the user shell profile:
#!/bin/bash

if [[ x"${CMAKE_BUILD_PARALLEL_LEVEL}" == x ]]; then
  n=8

  case "$OSTYPE" in
    linux*)  n=$(nproc) ;;
    darwin*) n=$(sysctl -n hw.physicalcpu) ;;
    bsd*)    n=$(sysctl -n hw.ncpu) ;;
  esac

  export CMAKE_BUILD_PARALLEL_LEVEL=${n}
fi
Or for Windows, in environment variable settings:
CMAKE_BUILD_PARALLEL_LEVEL=%NUMBER_OF_PROCESSORS%
If the computer runs out of RAM, reduce parallelism for a specific build with the cmake --build --parallel N command line option.
For Ninja build systems, specific targets can control the number of workers with
job pools.
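For example, a minimal CMake sketch (the target and pool names here are hypothetical) limiting how many compile jobs a memory-hungry target may run concurrently under the Ninja generator:

# allow at most 2 concurrent compile jobs in this pool (Ninja generator only)
set_property(GLOBAL PROPERTY JOB_POOLS heavy_jobs=2)

add_executable(big_sim main.cpp)

# route this target's compile and link jobs through the restricted pool
set_property(TARGET big_sim PROPERTY JOB_POOL_COMPILE heavy_jobs)
set_property(TARGET big_sim PROPERTY JOB_POOL_LINK heavy_jobs)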
The purpose of compiler flag checks is to test if a flag is supported.
Meta-build system (such as CMake and Meson) compiler flag checks must not test the “-Wno-” form of a warning flag.
This is because several compilers, including Clang, GCC, and Intel oneAPI, emit a “success” return code 0 for the “-Wno-” form of an unsupported flag.
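The robust pattern is to check the positive form of the flag and, if supported, apply the “-Wno-” form. A minimal CMake sketch (the warning flag and variable name are illustrative):

include(CheckCompilerFlag)

# check the positive form; checking -Wno-dangling-else could falsely succeed
check_compiler_flag(C "-Wdangling-else" has_dangling_else)

if(has_dangling_else)
  add_compile_options(-Wno-dangling-else)
endif()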
From time to time, the topic of why meta-build systems like CMake and Meson are single-threaded sequentially executing processes is brought up.
With desktop workstations (not to mention build farms) having 32, 64, and even 128+ CPU cores increasingly widespread, and the configure / generation step of meta-build systems taking tens of seconds to a few minutes on large projects, developers are understandably frustrated by the lack of parallelism.
A fundamental issue for CMake, Meson, and other meta-build systems alike is that the user's build scripts would then have to be dataflow / declarative rather than imperative.
This would require radically reworking the script syntax and the meta-build internals.
In Meson, Python threading (only one thread executes Python bytecode at a time, due to the GIL) is used in subprojects, giving a speed boost in download time of subproject source code.
There is no Python multiprocessing or ProcessPoolExecutor in the Meson configure step.
Meson parallel execution is for build (Ninja) and test.
Both build and test are also done in parallel in CMake.
For CMake, the ExternalProject steps can already be run in parallel (including download) via the underlying build system.
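For reference, CMake's parallel build and test are driven from the command line like this (the directory name and job count are arbitrary):

cmake --build build --parallel
ctest --test-dir build --parallel 8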
A way to speed up meta-build configure time (here specific to CMake) is to stuff the CMakeCache.txt file with precomputed values, and/or use a CMake toolchain file to do likewise, skipping configure tests when the host build environment is static.
CMakeCache.txt stuffing is a technique Meson uses to speed up configure time of CMake-based subprojects from Meson projects.
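A minimal sketch of cache stuffing via a preload script, assuming a configure test whose result is known ahead of time (HAVE_UNISTD_H is an illustrative example):

# precache.cmake -- used as: cmake -C precache.cmake -B build
# pre-answering the check means check_include_file() is skipped at configure time
set(HAVE_UNISTD_H 1 CACHE BOOL "precomputed: unistd.h is available on this host")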
Flake8 is a Python code style checker that can detect syntax errors without executing the code.
Under Python 3.12, flake8 detects PEP8 formatting errors inside f-strings that are not yet handled by Black.
Currently these formatting errors must be corrected by hand.
CI jobs should test with Python < 3.12 and Python >= 3.12 to ensure the f-string syntax is valid in older and newer Python versions.
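As a hypothetical illustration (the exact error code depends on the flake8 / pycodestyle versions in use), Python 3.12 tokenizes inside the replacement field, so flake8 can see style problems there:

value = 3.14159

# under Python 3.12, flake8 can flag the stray whitespace before ':' inside
# the f-string; under Python < 3.12 the f-string is a single opaque token
print(f"{value :.2f}")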
However, it is often more convenient (if used with care) to use terse variables that are not as specific:
if(APPLE)
  # macOS
elseif(BSD)
  # FreeBSD, NetBSD, OpenBSD, etc.
elseif(LINUX)
  # Linux
elseif(UNIX)
  # Linux, BSD, etc. (including macOS if not trapped above)
elseif(WIN32)
  # Windows
else()
  message(FATAL_ERROR "Unsupported system: ${CMAKE_SYSTEM_NAME}")
endif()
John W. Eaton
continues to be heavily involved with GNU Octave development as seen in the
commit log.
The GNU Octave developer community has been making approximately yearly major
releases.
Octave is useful to:
run Matlab code to determine if it’s worth porting a function to Python
Octave allows running Matlab “.m” code without changes for many tasks.
“.m” code that calls proprietary toolboxes or advanced functions may not work in Octave.
I generally recommend learning and using Python unless one already has significant experience and a lot of code in Matlab.
Practically what happens is that we choose a “good enough” language.
What’s important is having a language that most other people are using so we can share results.
The team might be building a radar or robot or satellite imager, and what’s being used in those domains is C, C++, Matlab, and Python.
I want a data analysis language that can scale from Cortex-M0 to Raspberry Pi to supercomputer.
Yes, Matlab can use the Raspberry Pi as a target, works with software defined radio, etc.
Will collaborators have the “right” version of Matlab and the toolbox licenses to replicate results?
How can I debug 100 Raspberry Pi’s sitting out in a field?
I need to use the GPIO, SDR, do machine learning processing and forward packets, perhaps using coroutines.
Since 2014,
MicroPython
has been rapidly growing in the number of MPUs and SoCs it supports.
For just a few dollars, numerous IoT wireless modules can run an expansive subset of Python 3 including exception handling, coroutines, etc.
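For instance, a hypothetical MicroPython sketch using its asyncio subset (the sensor read and timings are placeholders):

import asyncio  # MicroPython ships an asyncio subset (formerly uasyncio)

async def sample_sensor():
    while True:
        # placeholder for real ADC / I2C reads and cloud upload
        await asyncio.sleep(1)

async def main():
    asyncio.create_task(sample_sensor())
    await asyncio.sleep(10)  # run for 10 seconds, then exit

asyncio.run(main())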
For rapid prototyping, one can have a prototype SoC running remote sensing code that passes data to the cloud before the first planning meeting.
Consider higher-level languages' ease of development and tooling, or inherent memory safety.
Like math systems such as
Sage,
SciLab allows integrating multiple numerical systems together.
However, SciLab is its own language, with convenient syntax and a Matlab to SciLab converter.
SciLab, IDL, Mathematica, and Maple suffer from small audience sizes and a limited number of third-party libraries.
If an SSH session hasn’t been used for a while or the laptop goes to sleep, the SSH session typically disconnects.
This leaves a frozen Terminal window that can’t be used.
Usually the Ctrl+C keyboard combo does not work.
To avoid having to close the Terminal window, unfreeze the SSH client so that the same Terminal window can be used to reconnect to the SSH session.
This avoids needless rearranging when you’ve already got a desired tab/window layout for the Terminal.
In the frozen Terminal window, press these keys in sequence: Enter, then ~, then . (the OpenSSH escape sequence that terminates the hung session).
Python distutils was removed from Python 3.12 as proposed in
PEP 632.
Setuptools 49.1.2 vendored distutils, but experienced some friction in setuptools 50.x since so many packages monkeypatch distutils, a consequence of distutils being little-maintained for several years.
With distutils deprecation in Python 3.10,
migration
to setuptools is a topic being worked on by major packages such as Numpy.
Aside from major packages in the Scipy/Numpy stack, I don’t recall many current packages relying on distutils. However, there is code in some packages using import distutils that could break.
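A minimal sketch of one mitigation, assuming a recent setuptools whose default SETUPTOOLS_USE_DISTUTILS=local behavior provides a vendored distutils:

import sys

if sys.version_info >= (3, 12):
    # stdlib distutils is gone; importing setuptools first installs
    # its vendored distutils shim
    import setuptools  # noqa: F401

import distutils.util

print(distutils.util.get_platform())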
I applaud the decision to remove distutils from Python stdlib despite the fallout.
The fallout is a symptom of the legacy baggage of Python’s packaging.
Non-stdlib packages like setuptools are so much more nimble that sorely needed improvements can be made more rapidly.
Compilers define macros that can be used to identify a compiler and platform from compiled code, such as C, C++, Fortran, et al.
This can be used for platform-specific or compiler-specific code.
If a significant amount of code is needed, it may be better to swap in different code files using the build system instead of lengthy #ifdef logic.
There are numerous examples for C and C++ so here we focus on macros of Fortran compilers.
Macro definitions are obtained in an OS-agnostic way by:
echo "" | gfortran -dM -E - > macros.txt
that creates a file “macros.txt” containing all the compiler macros.
Commonly used macros to detect the operating system / compiler configuration include:
_WIN32 1
__linux__ 1
__unix__ 1
__APPLE__ 1
CAUTION: these macros are not actually available in gfortran-compiled programs as they are in GCC.
A workaround is to have the build system define these for the particular compiler, OS, etc.
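A minimal CMake sketch of this workaround (the macro choices mirror the list above; whether the Fortran compiler honors them still requires preprocessing, e.g. gfortran -cpp):

# define OS-identifying macros for Fortran sources, since the compiler
# may not predefine them the way GCC does for C / C++
if(WIN32)
  add_compile_definitions($<$<COMPILE_LANGUAGE:Fortran>:_WIN32=1>)
elseif(APPLE)
  add_compile_definitions($<$<COMPILE_LANGUAGE:Fortran>:__APPLE__=1>)
elseif(LINUX)
  add_compile_definitions($<$<COMPILE_LANGUAGE:Fortran>:__linux__=1>)
endif()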