I was using Little Snitch years ago and installed it again yesterday. Little Snitch is a neat little firewall for macOS (basically a socket filter) – and it now also comes with a beautifully designed map that shows at a glance where your data is going geographically:

Where is my data going?

It's funny to see how some apps even phone home when they are not supposed to. I don't get why DeepL, my favourite AI translator, hosts its data in the US, whereas the company sits just a kilometer away from my office in Cologne. They should not.

Little Snitch comes with a "silent mode" which by default allows all traffic, but sends you a notification as soon as a "new" connection is established – and you can then decide whether to allow or deny it.


ChatGPT and SQL

ChatGPT and SQL seem to be natural allies: you can describe a query in natural language, let ChatGPT generate the SQL, and then edit the code in your favorite SQL editor. In my case, I am a keen user of Metabase for doing any kind of BI work directly on the database, and from there I refine the queries with ChatGPT. What I find _very_ special is that ChatGPT makes mistakes – and bluntly apologizes and corrects itself. That's amazing.

ChatGPT apologizing and correcting itself
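The same round trip can also be scripted against the OpenAI API instead of the chat UI. A minimal sketch, assuming the (pre-1.0) openai Python package and a made-up orders table – table and column names here are purely illustrative:

# A minimal sketch: turn a natural-language question into SQL
# (assumes the pre-1.0 openai package; the schema below is made up)
import openai

openai.api_key = "sk-..."  # your API key

question = "Monthly revenue per country for 2022, highest first"
prompt = (
    "Translate the question into PostgreSQL. "
    "Table orders(id, country, amount, created_at).\n"
    f"Question: {question}\nSQL:"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=150,
    temperature=0,
)
print(response["choices"][0]["text"].strip())
# e.g. SELECT country, date_trunc('month', created_at) AS month, ...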
IT life

Does it make sense to have both a private computer and one for work?

I am in front of my computer most of the day. I am using a MacBook Pro with a Studio Display, Magic Mouse and Magic Keyboard. My software setup is as follows:

  1. I have been using Apple Mail for ages – it's fast and I really love the search function
  2. I recently switched to Visual Studio Code for coding due to the discontinuation of Atom
  3. Google Workspace for most office-related stuff
  4. Excel for calculating things
  5. I am fully in Safari, embracing the deep integration of Keychain
  6. Apple Calendar and Reminders for to-do stuff
  7. SmartGit for version control
  8. Jira and Freshdesk for ticketing
  9. Mastonaut for posting on Mastodon
  10. WhatsApp Desktop client
  11. and Slack for keeping it all together

Thing is – when I want to switch to "private" mode and use the computer e.g. to make music in Logic Pro or edit my photos in Lightroom, there's always the feeling of "work is too close". I know I can have multiple users on the machine, but it still "feels" like work.

Therefore I am thinking of adding a private machine, so that I can put the MacBook aside on weekends or holidays and "enjoy" private computer time on… a Mac mini? How is your setup? Do you have separate machines for work and private stuff?

    AI IT

    How does an AI Strategy fit into an IT and Business strategy?

    An AI strategy is a plan for how an organization will use artificial intelligence to achieve its goals. It fits into a business strategy by identifying specific business problems that AI can help solve, and outlining the steps that will be taken to implement AI solutions. The AI strategy also fits into an IT strategy by outlining the technology and infrastructure that will be needed to support the AI solutions.

An AI strategy is part of an IT and business strategy

    Example Retail

    For example, a retail company may use AI to improve its customer service by implementing a chatbot that can answer customer questions and help them find products. In this case, the AI strategy would be a part of the company’s overall business strategy to improve customer satisfaction. The IT strategy would need to include the implementation of the necessary technology, such as the chatbot software, and the integration of the chatbot with the company’s existing systems.

    Example Healthcare

As another example, a healthcare company may use AI to improve patient outcomes by developing predictive models that identify patients at high risk of certain conditions. In this case, the AI strategy would be part of the company's overall business strategy to improve patient care. The IT strategy would need to cover the necessary technology, such as the predictive modeling algorithms and their integration with the company's existing systems.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

    AI business IT

Prediction: in 2023 we will finally see the beginning of wider business adoption of machine learning and AI services – and here's why

At the end of 2022, ChatGPT made its way into the news and created a lot of buzz.

The reason: OpenAI, the company behind ChatGPT, had developed a new frontend for its generative language model GPT-3. GPT-3 had been released two years earlier and was at the time the largest model ever created. It was only accessible via an API though, and you had to make it through a waiting list first. ChatGPT changed the game by being an easy-to-use "chat" interface to GPT-3, free for everybody. Many users understood for the first time what machine learning, or AI services, are capable of: they created poems, let ChatGPT write yet another Star Wars movie script and many other funny things. But the underlying achievements OpenAI was able to deliver are nothing less than stunning – and will teach many businesses what benefits AI services can bring.

ChatGPT has made visible the potential that AI services have when they are skillfully combined, and when models that have technically been around for years are trained on amounts of data that were previously unthinkable. GPT-3 was trained on roughly 10x as much data as previous models. More specifically, GPT-3 combines multiple models and techniques, like semi-supervised learning and transformers, in an intelligent way – and that's the fascinating part.

Generally, until now, an AI model brought a fixed set of "capabilities" to the table, e.g. the classics like sentiment analysis ("What is the sentiment in a certain text?") or classification ("Is the text a question or a statement?").
This is now different: GPT-3 can not only do the above, but also learn new things very quickly with high efficiency and accuracy. This is called the zero-, one- or few-shot capability of a model, and GPT-3 achieves incredibly good scores here. It means, for example, that you can teach it to translate into a new language with just three example "training sessions", and from then on the model does it by itself.
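To make this concrete, here is a minimal sketch of few-shot prompting, assuming the (pre-1.0) openai Python package – the three examples in the prompt are the whole "training", no model weights change:

# Few-shot "training" happens inside the prompt itself.
import openai

openai.api_key = "sk-..."  # your API key

prompt = """Translate English to German.

sea otter => Seeotter
peppermint => Pfefferminze
cheese => Käse
plush giraffe =>"""

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model
    prompt=prompt,
    max_tokens=20,
    temperature=0,
)
print(response["choices"][0]["text"].strip())  # e.g. "Plüschgiraffe"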

    Why this is so important for companies: the ability to (autonomously) learn and adapt.

Every company claims to be unique. This may be true in some areas, but often the cross-functional areas (IT, HR, finance, etc.) are essentially the same. The HR department of a bank does not do much differently than the HR department of an automotive supplier. This also explains the success of "general" office products like Excel and co. that are used in all companies (a spreadsheet like Excel, by the way, compares structurally quite well to an AI model). But WHAT is calculated in Excel changes from company to company.
    Modern AI architectures like GPT-3 are now able to learn exactly this by themselves:
    1. what is my company specific data to work on?
    2. what are my company-specific questions that I should answer?
    3. what are my company-specific added values that I should deliver?

These capabilities, which ChatGPT now presents to users in a very concrete way, are what will drive the entry of AI into companies. Because the results above are simply "shocking" in a positive sense.
    I see three areas in particular where we will see AI services much more often very soon:
1. integrated AI: e.g. directly integrated into a software product to make predictions (for example a Salesforce AI service that directly qualifies a lead).
2. standalone AI services (e.g. a chatbot that answers customer service questions on its own).
3. generative AI services: corporate communications, marketing copy, sales presentations that a service creates autonomously and that are only approved or fine-tuned afterwards by a "real" employee.

The productivity gains are enormous, and the knowledge about introducing AI services – which skills and teams are needed – will also spread. Because one thing should be clear to everyone: AI services are far more than a technical tool that can be introduced; they are, to an even greater extent, a corporate change bigger than all "digitization measures" combined. Digitization, by comparison, was a wet fart.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

    AI IT

    Going Deeper: how to build and train your own models using neural networks with PyTorch or TensorFlow

First of all, deep learning is a subfield of machine learning that uses neural networks to build models that can process and make predictions on data. These neural networks are typically composed of multiple layers, with the first layer receiving the input data and each subsequent layer building on the previous one to learn increasingly complex representations of the data.

    Technically, deep learning models are trained by presenting them with large amounts of data and adjusting the model’s parameters to minimize a loss function, which measures the difference between the model’s predicted output and the correct output. This process is known as gradient descent, and it typically involves using algorithms such as backpropagation to compute the gradient of the loss function with respect to the model’s parameters.
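To illustrate the core idea, here is a toy example of gradient descent on a single parameter – nothing deep-learning-specific yet, just the update rule itself:

# Minimize the toy loss (w - 3)^2 by repeatedly stepping against the gradient.
w = 0.0          # initial parameter
lr = 0.1         # learning rate
for step in range(50):
    grad = 2 * (w - 3)   # derivative of (w - 3)^2 with respect to w
    w -= lr * grad       # gradient descent update
print(w)  # converges towards 3.0, where the loss is minimal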

In contrast to classical machine learning, no manual feature engineering on the input data is needed:

In contrast to classical machine learning (ML), features do not have to be labeled manually in deep learning (DL). Deep learning algorithms are capable of identifying features themselves and recognize this example as the "house of Nikolaus".

    Here is an example of code for training a deep learning model using the PyTorch library:

# Import the necessary PyTorch modules
import torch
import torch.nn as nn
import torch.optim as optim

# Define the neural network architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 64)
        self.fc3 = nn.Linear(64, 128)
        self.fc4 = nn.Linear(128, 10)

    def forward(self, x):
        x = nn.functional.relu(self.fc1(x))
        x = nn.functional.relu(self.fc2(x))
        x = nn.functional.relu(self.fc3(x))
        return self.fc4(x)

# Create an instance of the neural network
net = Net()

# Define the loss function and the optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

# Dummy training data: batches of (inputs, labels) -- replace with your own
train_data = [(torch.randn(16, 10), torch.randint(0, 10, (16,)))
              for _ in range(10)]

# Train the model
for epoch in range(100):
    # Iterate over the training data
    for inputs, labels in train_data:
        # Clear the gradients
        optimizer.zero_grad()
        # Forward pass
        outputs = net(inputs)
        # Compute the loss and the gradients
        loss = criterion(outputs, labels)
        loss.backward()
        # Update the model's parameters
        optimizer.step()

    This code creates a neural network with four fully-connected (fc) layers, trains it on some training data using stochastic gradient descent (SGD), and optimizes the model’s parameters to minimize the cross-entropy loss. Of course, this is just a simple example, and in practice you would want to use more sophisticated techniques to train your deep learning models.
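For completeness, here is a small sketch of using the trained net from above for a prediction (the input is a made-up example, mirroring the TensorFlow predict example further below):

# Inference with the trained PyTorch model
with torch.no_grad():              # no gradients needed for inference
    sample = torch.randn(1, 10)    # one made-up example with 10 features
    logits = net(sample)
    predicted_class = logits.argmax(dim=1).item()
    print(predicted_class)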

    A basic code example using TensorFlow to define and train a deep learning model may look like this:

# Import necessary TensorFlow libraries
import tensorflow as tf
from tensorflow.keras import layers

# Define the model architecture
model = tf.keras.Sequential()
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# Compile the model with a loss function and an optimizer
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

# Load the training data and labels
train_data = ...
train_labels = ...

# Train the model on the training data
model.fit(train_data, train_labels, epochs=5)

    In this code example, the first two lines import the necessary TensorFlow libraries for defining and training a model.

    The next three lines define the architecture of the model using the Sequential class and the Dense layer. The model has three dense layers with 64 units each, using the ReLU activation function for the first two layers and the softmax activation function for the final layer.

    The compile method is used to specify the loss function and optimizer for training the model. In this case, we are using the SparseCategoricalCrossentropy loss function and the Adam optimizer.

    Next, the training data and labels are loaded and the fit method is used to train the model on the data for 5 epochs. This will run the training process and update the model’s weights to improve its performance on the training data.

    Once the model is trained, it can be used to make predictions on new, unseen data. This can be done with the predict method, as shown in the following example:

# Load the test data
    test_data = ...
    # Make predictions on the test data
    predictions = model.predict(test_data)

    In this code, the test data is loaded and passed to the predict method of the trained model. The method returns the predicted labels for the data, which can then be compared to the true labels to evaluate the model’s performance.

PyTorch or TensorFlow?

Whether you use PyTorch or TensorFlow for creating, training and querying your neural network may come down to personal or use-case-related preferences, but there are some subtle differences:

    1. Ease of use: PyTorch is generally considered to be more user-friendly than TensorFlow, particularly for tasks such as building and training neural networks. PyTorch provides a high-level interface for defining and training models, while TensorFlow can be more verbose and require more boilerplate code.
    2. Performance: TensorFlow is generally considered to be more efficient and scalable than PyTorch, particularly for distributed training and serving models in production. TensorFlow also has a number of tools and libraries for optimizing performance, such as the XLA compiler and TensorRT.
    3. Community: TensorFlow has a larger and more established community, with more resources and support available online. PyTorch is a newer framework and is rapidly growing in popularity, but it may not have as much support as TensorFlow.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

    AI business IT

How TensorFlow can help HR departments streamline their processes

TensorFlow is a powerful open-source tool developed by Google that can help HR departments in a variety of ways. At its core, TensorFlow is a machine learning platform that allows users to build and train complex models using large amounts of data. This ability to process large amounts of data quickly and accurately makes TensorFlow an ideal tool for HR departments looking to improve their processes and make more informed decisions.

TensorFlow and Recruiting

    One of the key ways that TensorFlow can help HR departments is by automating and improving the process of recruitment and selection. By training a model on large amounts of data (e.g. from SAP SuccessFactors, Workday etc.), HR departments can use TensorFlow to identify the most important factors in determining a successful candidate and automate the process of sifting through resumes and applications. This can save HR departments a significant amount of time and resources, and allow them to focus on other important tasks.

    TensorFlow and Performance Management

    Another area where TensorFlow can be useful for HR departments is in performance management. By training a model on data about an employee’s past performance, HR departments can use TensorFlow to identify patterns and trends that may indicate an employee’s potential for future success. This can help HR departments make more informed decisions about promotions, salary increases, and other important decisions related to employee performance.

    TensorFlow can also be used to improve the accuracy and fairness of salary and compensation decisions. By training a model on data about an employee’s past performance, job responsibilities, and other factors, HR departments can use TensorFlow to identify any potential biases or inconsistencies in their current compensation practices. This can help HR departments ensure that their compensation decisions are fair and based on objective criteria, and can help to prevent discrimination and other potential legal issues.

TensorFlow and Reporting

    In addition to these specific applications, TensorFlow can also help HR departments in more general ways. For example, TensorFlow can be used to automate and improve the process of generating reports and analytics, which can help HR departments make more informed decisions about the effectiveness of their policies and practices. Additionally, TensorFlow can be used to identify potential issues and trends within an organization, such as high turnover rates or low employee satisfaction, and provide HR departments with the information they need to address these issues.

TensorFlow to identify employees at risk of leaving

    Traditional methods of predicting employee turnover often rely on manual analysis of a small number of data points, such as employee performance reviews or exit interviews. This can be time-consuming and may not provide a complete picture of an employee’s likelihood of leaving the company.

    TensorFlow, on the other hand, can analyze vast amounts of data from various sources, including employee performance data, demographics, and other relevant factors. This allows HR departments to gain a more comprehensive view of an employee’s likelihood of leaving the company, enabling them to make more informed decisions about retention strategies. Traditional methods of predicting employee turnover may not be able to identify subtle patterns or trends that could be indicative of an employee’s likelihood of leaving the company. TensorFlow, on the other hand, can identify these patterns and trends, providing HR departments with valuable insights into the factors that may be contributing to employee turnover.

From reaction to action: act before an employee leaves

One example of how TensorFlow can be used for employee turnover prediction is the development of a predictive model. This model could be trained on a large dataset of employee data, including factors such as performance metrics, demographics, and job satisfaction, and then used to predict the likelihood of an individual employee leaving the company. The model may identify, say, that employees with low job satisfaction are more likely to leave. HR departments could then implement strategies to improve job satisfaction, such as offering training or career development opportunities, in an effort to reduce turnover.

Another possibility is an employee turnover dashboard. Such a dashboard could give HR departments a visual representation of turnover data, allowing them to easily identify trends and patterns. It could also raise real-time alerts when an employee is at risk of leaving, allowing HR to take immediate action to retain them.
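What such a model could look like in code – a minimal sketch in Keras, where the feature set, data shapes and column meanings are purely illustrative assumptions, not real HR data:

# A minimal, illustrative turnover classifier -- the features are made up.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Stand-in for real HR data: 8 features per employee, e.g. tenure,
# performance score, salary percentile, job-satisfaction survey score, ...
X_train = np.random.rand(1000, 8).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))  # 1 = employee left

model = tf.keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of leaving
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, batch_size=32)

# Rank employees by predicted attrition risk for the dashboard / alerts
risk_scores = model.predict(X_train)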

    TensorFlow vs. Azure Cognitive Services in HR processes

As stated above, TensorFlow and Azure Cognitive Services are both powerful tools for machine learning and artificial intelligence (AI) applications. While TensorFlow is an open-source library for machine learning and deep learning, Azure Cognitive Services is a suite of AI services provided by Microsoft. Both tools have their own advantages and disadvantages, which should be considered when deciding which one to use for a particular project.

    One major advantage of TensorFlow is its flexibility. TensorFlow allows developers to build and train their own custom machine learning models, which can be tailored to specific applications and data sets. This flexibility can be particularly useful for complex projects that require specialized models or algorithms.

    Another advantage of TensorFlow is its ability to handle large amounts of data. TensorFlow is designed to scale to large data sets, allowing it to handle large volumes of data without sacrificing performance. This makes it ideal for projects that require the analysis of large amounts of data, such as natural language processing or image recognition.

However, TensorFlow also has some disadvantages. The main one is its complexity. TensorFlow is a powerful tool, but it can be difficult for beginners or inexperienced IT departments to learn and use. In order to use TensorFlow effectively, developers need a strong understanding of machine learning algorithms and techniques, as well as experience with programming languages such as Python.

    In contrast, Azure Cognitive Services is a more user-friendly tool. Azure Cognitive Services provides pre-trained machine learning models that can be easily integrated into applications without the need for extensive programming knowledge. This makes it a good choice for developers who are new to machine learning or who want to quickly add AI capabilities to their applications.

    Another advantage of Azure Cognitive Services is its availability. Azure Cognitive Services is available as a cloud-based service, which means that developers can easily access and use the service without the need to install any software or hardware. This can be particularly useful for developers who are working on projects that require fast deployment or who do not have access to dedicated machine learning hardware.

    However, Azure Cognitive Services also has some disadvantages. One major disadvantage of Azure Cognitive Services is its cost. Azure Cognitive Services is a subscription-based service, which means that developers need to pay for the service on a monthly or annual basis. This can be expensive, especially for projects that require the use of multiple Azure Cognitive Services.

    Another disadvantage of Azure Cognitive Services is its lack of flexibility. Because Azure Cognitive Services provides pre-trained models, developers are limited to using the models that are provided by the service. This can be limiting for projects that require custom models or algorithms.

    In conclusion, TensorFlow and Azure Cognitive Services are both powerful tools for machine learning and AI applications. TensorFlow offers flexibility and the ability to handle large amounts of data, but it can be complex and difficult to use. Azure Cognitive Services is user-friendly and available as a cloud-based service, but it can be expensive and lacks flexibility. The best choice between the two will depend on the specific requirements of the HR project and the experience and expertise of the development team.

In my company, my-vpa.com, which basically is an HR tech company, we mainly use Azure and AWS Comprehend for our HR processes. For example, we implemented an AI-powered zero-touch recruiting process which is capable of recruiting up to 200 assistants per month.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

    AI business IT

AI and the importance of Data Governance

Data governance and AI are two closely related concepts that can work together in an enterprise to improve the efficiency and effectiveness of business operations. Let me lay out why there's no AI-powered process without proper data governance (DG):

What is data governance?

    At a high level, data governance refers to the processes and policies that are put in place to manage and oversee the collection, storage, and use of data within an organization. This can include defining roles and responsibilities for data management, establishing standards and protocols for data quality and security, and implementing systems for monitoring and auditing data usage.

    In an enterprise, AI and DG can work together in several ways: For example, data governance can help ensure that the data used for AI models is of high quality and is properly managed and protected. This can involve implementing processes for verifying the accuracy and completeness of the data, as well as setting up systems for securing the data and monitoring its usage.
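As a small illustration of the "verifying accuracy and completeness" part – a minimal sketch, assuming pandas and a made-up employee dataset; the file name and columns are illustrative assumptions:

# Automated data-quality gate before data reaches a model.
import pandas as pd

df = pd.read_csv("employees.csv")  # hypothetical training data

checks = {
    "no_missing_ids": df["employee_id"].notna().all(),
    "salary_plausible": df["salary"].between(20_000, 500_000).all(),
    "no_duplicate_ids": not df["employee_id"].duplicated().any(),
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data-quality checks failed: {failed}")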

    Additionally, data governance can help to ensure that the AI models being used by the enterprise are fair, ethical, and transparent. This can involve establishing guidelines and protocols for evaluating the performance and biases of AI models, as well as implementing systems for monitoring and auditing their usage.

    Here are some examples of how data governance and AI can be integrated in an enterprise:

    • Developing a comprehensive data strategy that outlines the goals and objectives of the organization’s AI initiatives, as well as the roles and responsibilities of various teams and individuals involved in data management and AI development.
    • Establishing clear policies and guidelines for the collection, storage, and use of data, including guidelines for data quality, security, and privacy.
    • Implementing processes for data access and decision-making that ensure that data is used consistently and ethically, and that the organization’s AI models are trained and evaluated on a diverse and representative dataset.
    • Establishing a data governance board or committee that is responsible for overseeing the organization’s data governance and AI initiatives, and for making decisions about the use of AI in the organization.
    • Implementing regular training and education programs for employees on topics related to data governance and AI, to ensure that everyone in the organization is aware of the organization’s policies and practices.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

    AI IT my-vpa

    AI and Workflow automation – how do they work together?

    AI services like Azure, TensorFlow, and Comprehend are becoming increasingly popular among enterprises, as they offer a wide range of benefits. These services can be used to improve workflow automation, allowing businesses to streamline their processes and make them more efficient.

One of the key ways in which AI services can work together with enterprise workflow automation solutions is through the use of standard APIs. For example, n8n.io provides a standard integration with AWS Comprehend.


One example use case for this type of integration is customer service. AI services like Azure and Comprehend can be used to analyze customer feedback and identify common issues or areas for improvement. This information can then be fed into a workflow automation system, which automatically routes the feedback to the appropriate team or individual for further action – with workflow platforms, the information can also be fed directly into tools like Slack or Mattermost.
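A minimal sketch of that idea, assuming boto3 with configured AWS credentials and a hypothetical n8n webhook URL:

# Analyze a piece of customer feedback, then hand the result to a workflow.
import boto3
import requests

comprehend = boto3.client("comprehend", region_name="eu-central-1")

feedback = "The update broke the export feature and support never answered."
result = comprehend.detect_sentiment(Text=feedback, LanguageCode="en")

if result["Sentiment"] == "NEGATIVE":
    # Hypothetical n8n webhook that routes the ticket to the right team
    requests.post("https://n8n.example.com/webhook/feedback", json={
        "text": feedback,
        "sentiment": result["Sentiment"],
        "scores": result["SentimentScore"],
    })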

    Another example use case is in the field of finance. AI services like TensorFlow can be used to analyze financial data and identify trends or anomalies that may indicate potential issues or opportunities. This information can then be fed into a workflow automation system, which can automatically generate alerts or take other actions as needed.

Integrating AI services like Azure, TensorFlow, and Comprehend with enterprise workflow automation solutions can provide a range of benefits, including increased efficiency, improved customer service, and better decision-making. Standard APIs make it easy to plug these services into existing systems, adding capabilities and enabling businesses to get the most out of their workflow automation solutions. Over at my-vpa.com we are running a large n8n farm in order to automate tasks for ourselves and our customers.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

    AI IT

Now that you want to integrate AI into your custom-built software: which are the best open-source modelling tools out there?

There are several open-source AI modelling tools available, each with its own unique features and capabilities. Some of the most popular options are TensorFlow, Keras, PyTorch, and scikit-learn.

    TensorFlow is a powerful open source library for deep learning, developed by the Google Brain team. It allows users to build and train complex neural network models for a variety of tasks, including image recognition, natural language processing, and time series forecasting. TensorFlow is highly scalable and can be used for both research and production environments.

    Keras is a high-level API for building and training deep learning models. It is built on top of TensorFlow and is designed to be easy to use and intuitive for developers who are new to deep learning. Keras allows users to quickly prototype and experiment with different architectures and hyperparameters, making it a popular choice for researchers and data scientists.

    PyTorch is another popular open source library for deep learning. It is developed by Facebook AI Research and is designed to be flexible and easy to use. PyTorch allows users to build complex neural network models and perform computations on tensors, a data structure similar to matrices. PyTorch is known for its support for dynamic computational graphs, which allow users to build models on the fly and modify them during training.

    scikit-learn is a machine learning library for Python that is widely used in the data science community. It offers a wide range of algorithms for classification, regression, clustering, and dimensionality reduction, along with tools for model evaluation and selection. scikit-learn is designed to be easy to use and can be integrated with other libraries, such as NumPy and Pandas, to create powerful data analysis pipelines.
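For a feel of how compact scikit-learn code is, here is a minimal sketch using its built-in iris dataset:

# Train and evaluate a classic ML model in a few lines of scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))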

All of the above are written in Python; PyTorch also ships Java and C++ bindings. scikit-learn, as mentioned, focuses on traditional machine learning algorithms rather than deep learning.

    A difference is the level of abstraction provided by the libraries. TensorFlow and PyTorch offer low-level APIs that allow users to build and customize their own neural network architectures, while Keras provides a higher-level API that allows users to quickly build and train pre-defined architectures. scikit-learn offers a more general-purpose API for traditional machine learning algorithms.

    In terms of performance, TensorFlow, Keras, and PyTorch are all optimized for training deep learning models on large datasets and can be used to build models that can run on GPUs and TPUs. scikit-learn is optimized for smaller datasets and can run on CPUs, but it may not be as efficient for larger datasets.

Hope this overview helps you find the model builder you want to go with 🙂

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de