More than just “We’re now doing everything with AI”

Experience shows that it is highly beneficial to continuously explore how (new) technology can help optimize internal processes in terms of quality, efficiency, and added value. This certainly applies to the broad field of Artificial Intelligence and everything it encompasses. Regardless of your industry, the question remains: how can the potential of technology be effectively leveraged? At Apps with love, we utilize well-established AI-assisted workflows while simultaneously experimenting with the latest possibilities. However, one principle remains central: responsibility rests 100% with us as humans.

It is also important to remember that AI is not a panacea. Many established processes and methods work efficiently and cannot simply be replaced by AI – and that is perfectly fine. The true art lies in identifying potential that can be realized with reasonable effort, and in regularly re-evaluating those possibilities. AI is not suitable for everything; sometimes it is left out entirely or used only selectively. After all, using "AI for AI's sake" makes no sense.

As a digital agency, we understand the technology. As a service provider with insights into various companies and industries, we have developed a quick understanding of diverse business needs and are skilled at matching technological possibilities with real-world challenges and opportunities. We share our knowledge and are happy to provide consulting - including on the topic of AI.

How we use AI in practice:

  • Knowledge & documentation: As you’d expect – for research, analysis, writing and documentation.

  • Software development: Naturally, for writing and reviewing code. But also in requirements engineering and testing – from thinking through and prioritizing requirements to creating test cases and implementing (semi-)autonomous processes.

  • Processes & prototyping: To analyze and improve internal workflows – but also in early project phases, when the goal is to quickly make ideas tangible as prototypes.

To ensure that all of this is not only functional but also responsible, we have internal guidelines for handling AI tools, ranging from data protection requirements and team-wide settings to defined workflows. And because the field is constantly evolving, we regularly invest in internal professional development.

Our services and ways to make your product smarter, more personalized, more tailored

As impressive as ChatGPT, Sora or Nano-Banana may be, the question with any technological solution is the same: what need does it address, and what tangible benefits does it deliver in the relevant context? This question is always the starting point for our work. AI or ML (Machine Learning) may then turn out to be the appropriate technologies for addressing these challenges and questions.

The potential applications of AI are wide-ranging, regardless of the specific problem to be solved. The effort involved and the level of complexity also vary greatly. Broadly speaking, the options range from simple to complex – although, of course, this always depends on the specific use case:

  • The simplest case: connecting an API – and voilà: a chatbot. This is what is commonly referred to as a ChatGPT wrapper – essentially an alternative interface for the AI models that can also be used elsewhere. Simply connecting an API is not enough in most cases; that’s where meta-prompting, limits, whitelisting and so on come into play.

  • The next level, particularly when there is a lot of existing data: RAG (Retrieval Augmented Generation) enhances the chatbot. Depending on the case, this can still be relatively straightforward, yet questions regarding technology architecture quickly arise, which require foresight and a delicate touch to answer.

  • Is RAG not the right fit, but you have a database or existing tools that people should be able to interact with using natural language? In that case, the right solution might be to add an MCP server (MCP = Model Context Protocol). At this stage, at the very latest, detailed questions arise regarding processes, IT architecture and the system landscape. We’d be happy to answer these together with you.  

  • Finally, the least visible option, but one that offers no less added value: things that genuinely simplify users’ lives. In the simplest case, this could mean interacting with an app via voice commands (and yes: AI can now understand Swiss German too). It gets particularly exciting when it comes to individualization based on users’ interactions: personalized suggestions or recommendations are just the first step – what if a user interface automatically adapted to my needs because the app gets to know me and optimizes itself…? 👀
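The "wrapper plus guardrails" idea from the first bullet can be sketched in a few lines. This is a hypothetical example: the topic whitelist, system prompt and model name are invented for illustration and do not describe a production setup.

```python
# Illustrative sketch of a chatbot wrapper with basic guardrails:
# a meta-prompt, an output limit, and a whitelist check before any
# tokens are spent. All names and rules here are hypothetical.

ALLOWED_TOPICS = {"opening hours", "pricing", "returns"}

SYSTEM_PROMPT = (
    "You are a support assistant for an example company. "
    "Answer only questions about: " + ", ".join(sorted(ALLOWED_TOPICS)) + ". "
    "If a question is out of scope, politely decline."
)

def is_on_topic(user_message: str) -> bool:
    """Crude whitelist check run before calling any model."""
    text = user_message.lower()
    return any(topic in text for topic in ALLOWED_TOPICS)

def build_request(user_message: str, max_tokens: int = 300) -> dict:
    """Assemble the API payload: meta-prompt, limit, user input."""
    return {
        "model": "example-model",   # placeholder model name
        "max_tokens": max_tokens,   # hard output limit
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

message = "What are your opening hours on Saturday?"
if is_on_topic(message):
    payload = build_request(message)
    # In a real app, the payload would now be sent to the provider's API.
```

In practice the guardrails are of course more elaborate (moderation, rate limits, logging), but the principle stays the same: the wrapper decides what reaches the model and what comes back.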
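The retrieval step behind RAG can also be reduced to its essence. The sketch below uses simple word-overlap scoring instead of real vector embeddings, and the stored documents are made up – it only illustrates the flow: score stored chunks against the question, then prepend the best match to the prompt.

```python
# Minimal sketch of the retrieval step in RAG. Real systems use
# vector embeddings and a vector store; plain word overlap and an
# in-memory list stand in here. The example documents are invented.

DOCUMENTS = [
    "Our office in Bern is open Monday to Friday, 9:00 to 17:00.",
    "Invoices are payable within 30 days of receipt.",
    "Support requests are answered within one business day.",
]

def score(question: str, chunk: str) -> int:
    """Count shared words between question and chunk (toy similarity)."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def retrieve(question: str) -> str:
    """Return the most relevant stored chunk."""
    return max(DOCUMENTS, key=lambda chunk: score(question, chunk))

def augmented_prompt(question: str) -> str:
    """Combine the retrieved context with the user question."""
    return f"Context: {retrieve(question)}\n\nQuestion: {question}"

print(augmented_prompt("When is the office in Bern open?"))
```

Swapping the toy scorer for embeddings is exactly where the architecture questions mentioned above begin: which embedding model, which store, how to chunk the data.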


Of course, we don’t think in terms of boxes and bullet points: when we develop a detailed concept, it usually turns out that a combination of some or all of the above is best. Let’s find out together!

In-house Claude Code workshop at Apps with love

The big question of models and data

We view artificial intelligence as a tool – a technology that, like any other, needs to be used correctly and with consideration. In the vast landscape of models and APIs, clarity is essential: how and where is data processed, and for what purpose? These questions are central, especially when dealing with personal or otherwise sensitive data. Depending on the specific case, we rely on the following options:

  1. For large language models (LLMs) or multimodal models in the cloud, we primarily use Microsoft Azure. Microsoft offers virtually all prominent "foundry models" (over 11,000 different ones) via its platform. These include well-known models such as GPT 5.3 or Whisper from OpenAI, Opus 4.6 or Sonnet 4.5 from Anthropic, and the models from Mistral. Models from DeepSeek and Meta, the FLUX image-generation models from Black Forest Labs, and many other smaller or specialized models are also available.

  2. However, the question often arises as to whether a cloud model is needed at all. After all, with our smartphones, we already carry surprisingly powerful AI computers in our pockets. These offer significant capabilities through ML Kit (Android) or Foundation Models (iOS/Apple). Functions such as image or text recognition can run entirely on-device and are straightforward to implement today. For other or more specialized use cases, smaller models can also be integrated directly into an app and shipped with it, or optionally downloaded by the user.

  3. If neither the cloud nor the on-device option is suitable due to requirements – for example, because the necessary model is too large for on-device processing, but the data is too sensitive for the cloud – then private cloud or on-premise solutions are required. While we are not the provider specializing in installing racks full of GPUs at our clients' sites, we fortunately have partnerships with companies that can. We find solutions for these use cases as well.
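The three options above boil down to a simple decision logic. The sketch below makes it explicit; the size threshold and the sensitivity flag are invented for illustration – in a real project the trade-off is assessed case by case.

```python
# Toy decision helper for the three deployment options described above.
# The threshold value is a hypothetical assumption, not a real limit.

ON_DEVICE_LIMIT_GB = 4  # assumed: what a phone can reasonably run

def choose_deployment(model_size_gb: float, data_is_sensitive: bool) -> str:
    """Pick on-device, cloud, or private/on-premise hosting."""
    if model_size_gb <= ON_DEVICE_LIMIT_GB:
        return "on-device"          # data never leaves the phone
    if not data_is_sensitive:
        return "cloud"              # e.g. a managed platform like Azure
    return "private cloud / on-premise"  # large model + sensitive data
```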

Looking for an entry point?

Our approach to AI is no different from how we usually work: approaching the task with human intelligence, focusing on real added value for all stakeholders – and end-users in particular – proceeding iteratively in stages, and remaining technology-neutral. In most cases, it is not clear from the outset that a Large Language Model or Generative AI is the right solution. However, if we conclude that it is the appropriate technology, we have the expertise and experience to implement AI professionally.

We can start with strategic consulting and workshops to analyze processes and identify potential, or with a technical Proof of Concept (PoC). Of course, we’re also happy to jump straight into implementing your already fully developed AI app idea.

Kim Jeker
Web Development
He dreams up the wildest backend, frontend, API and CMS concepts, and when you ask him whether they can actually be built, his answer is a simple «Jä». No wonder, with a treasure chest of experience like his!
Carra Tillon
Web Development
Anyone who learns to love coding after a degree in Mechanical Engineering, knows programming languages from the 1950s, trades Belgian waffles for raclette, tirelessly explains what “Korfball” is and runs marathons doesn’t just have plenty of stamina – they’re downright badass.
Alain Stulz
Technical Lead iOS Development
An iOS developer with extended abilities: he masters pretty much anything that runs on electricity, holds a Bachelor of Science from the University of Bern, and always keeps the improvement of our office automation in the back of his mind. Our ace in the hole!
Maximilian Lemberg
iOS Development & AI Engineer
Maxi is an instigator who sparks enthusiasm not only on trial days, but also at team outings, hackathons, Thirsty Fridays, basketball games – at pretty much any kind of event. Thanks to his DJ sets, no one stays off the dance floor even at office-warming parties, and the team stays motivated. Or, as he would probably put it: hyped.
Olivier Oswald
CTO | Co-Founder
His hair is well insured and testifies to IT experience dating back to the origins of the craft. There is no technical challenge he hasn’t already encountered. Not without reason does he have guru status – and can conjure up a Wi-Fi network through meditation.