One of the most complex challenges when working on any software project is debugging unintended behavior. In {{UBIK}}, there is an inherent structure to every project, which we can exploit for debugging. Let's find out how.
== Quick-fix check list ==
Many issues can be resolved by going through the following check list.
* Check settings and configurations for typos, missing entries and other errors
* Restart {{UBIK}} Studio and reconnect your DB to avoid caching issues
* Check whether all plugins were loaded correctly
* In case the custom code was changed, or {{UBIK}} was upgraded to a new version:
** Compile and publish the customizing (F6)
** Restart the Enterprise Service
** Restart all Web Services
* In case the data model for the client was changed:
** Rebuild and publish the ACM meta definitions using the ACM manager
** Restart all web services
** Restart the {{UBIK}} client application to make sure new meta definitions and content are received

== A general policy for debugging ==
Debugging can be approached methodically. Here's a basic plan for debugging software.
# '''Reproduction''': Get all available, relevant information about the bug and confirm the problem in a test setup
# '''Inspection''': Inspect the actual behavior to understand the cause
# '''Fix''': Design and implement a solution
# '''Retest''': Test the fix

== Debugging a {{UBIK}} project ==
<!-- DO NOT REMOVE THIS -->{{Template:HowTo/Begin}}<!-- DO NOT REMOVE THIS -->
= Reproduction =
==== Full Test System ====
To reproduce the problem with {{UBIK}}, you require a test setup. This usually means creating a local copy of the affected database, and installing the {{UBIK}} products relevant for the problem. It is important to use the same binaries, plugins and versions as in the system where the problem occurred.
Then, we can try to provoke the reported issue in the test setup. This might require getting more information about the issue.
==== Isolation Testing ====
If a full test setup is not feasible, isolating a (presumably) faulty part and testing it individually often makes sense.
In {{UBIK}} Studio, there are two tools for this:
* Who-Bert Debugging Tool
* View Test Tool
Both can be used to test the behavior of {{UBIK}} objects (and custom code) on the server side.
With Who-Bert code and manually created test data, you can additionally set up a "mock" or "fake" situation, to test the behavior under very specific circumstances.
The View Test Tool simulates how the web service assembles data for the client, ignoring the ACM meta definitions (context, scopes etc.).
Another way to isolation-test your Plugin code is writing [https://en.wikipedia.org/wiki/Unit_testing unit tests], which is strongly encouraged.
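For example, a unit test for a small piece of plugin logic might look like the following sketch. The <code>PumpCodeParser</code> class and its parsing rule are purely hypothetical, not an actual {{UBIK}} API; the tests are hand-rolled here to stay self-contained, but in practice you would use a framework like NUnit or xUnit so they run automatically on every build.

```csharp
using System;

// Hypothetical plugin logic: parsing an equipment code of the
// form "AREA-TYPE-NUMBER" (illustration only, not a UBIK API).
public static class PumpCodeParser
{
    public static (string Area, string Type, int Number) Parse(string code)
    {
        var parts = code.Split('-');
        if (parts.Length != 3)
            throw new ArgumentException($"Invalid code: {code}");
        return (parts[0], parts[1], int.Parse(parts[2]));
    }
}

// Minimal hand-rolled tests; a real test framework gives you the same
// checks with better reporting and automatic discovery.
public static class PumpCodeParserTests
{
    public static void RunAll()
    {
        // Valid input is split into its three components.
        var r = PumpCodeParser.Parse("A1-PUMP-042");
        if (r.Area != "A1" || r.Type != "PUMP" || r.Number != 42)
            throw new Exception("Parse_SplitsValidCode failed");

        // Malformed input must fail loudly instead of producing bogus data.
        try
        {
            PumpCodeParser.Parse("A1-PUMP");
            throw new Exception("Parse_RejectsMalformedCode failed");
        }
        catch (ArgumentException)
        {
            // expected
        }
    }
}
```

Testing the parsing rule in isolation like this pins down whether a faulty object really comes from the parser, or from the data fed into it.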
= Inspection =
Once you have a test setup and are able to reproduce the issue, you can inspect what's happening in detail to find out why the problem occurs. Essentially, we want to know: when the algorithm makes a decision, which decision does it make and why? And where is the first wrong decision made that leads to the observable erroneous state?
This can be done either by debugging with Visual Studio, or by producing diagnostic output in the form of log entries, {{UBIK}} objects and property values, or UI customizing.
=== Inspect the mobile client ===
* Use the [[Developer_Mode]] to inspect the currently visible view models and their values.
* Inspect the log files of the mobile client, including the web service client log.
=== Inspect the web services or the Enterprise Service ===
* Inspect the log files of the web service or Enterprise Service.
* Modify your plugin or programmatic customizing to output log messages describing the state of your program at critical points.
* Modify your plugin or programmatic customizing to write diagnostic {{UBIK}} objects describing the state of your program at critical points.
* Use a Who-Bert script to test a specific setup and output log messages to the console.
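As an illustration of log-based inspection, the sketch below wraps a suspected decision point in diagnostic output, so one can see afterwards which decision was made and based on which values. All names (<code>DiagnosticLog</code>, <code>DocumentFilter</code>) are hypothetical, not actual {{UBIK}} APIs; in a real plugin you would use the product's own logging facility.

```csharp
using System;
using System.IO;

// A crude append-to-file logger, good enough for a quick inspection session.
public static class DiagnosticLog
{
    public static string LogFile = "debug.log";

    public static void Write(string location, string message)
    {
        var line = $"{DateTime.Now:O} [{location}] {message}";
        File.AppendAllText(LogFile, line + Environment.NewLine);
    }
}

// The (hypothetical) filtering rule we suspect of misbehaving.
public class DocumentFilter
{
    public bool IsVisible(string status, int revision)
    {
        // Log the inputs the decision will be based on...
        DiagnosticLog.Write(nameof(IsVisible),
            $"input: status={status}, revision={revision}");

        bool visible = status == "RELEASED" && revision > 0;

        // ...and the decision itself, so the log shows both together.
        DiagnosticLog.Write(nameof(IsVisible), $"decision: visible={visible}");
        return visible;
    }
}
```

Reading the resulting log alongside the reproduction steps usually reveals the first point where the program's view of the data diverges from your expectation.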
= Hypothesizing =
In order to narrow down the cause of the problem, we can try to formulate an idea of what could have gone wrong. Ideally, we then go looking for proof, to see it happen in action, but it's always good to know the potential error sources. In general, there are several common types of problems and, from another perspective, a set of common sources for such problems.
=== Visualizing the architecture and algorithm ===
In order to come up with a good hypothesis, you must understand the architecture and algorithm at work.
This means you have to find out which {{UBIK}} products and modules are involved and how the affected use-case is implemented in the project. The behavior of a use-case can be distributed across multiple products, i.e., the client application with its UI customizing, and the server products including the database, the Enterprise Service, the {{UBIK}} Web Services, {{UBIK}} Studio, any plugins, and the server customizing consisting of the data model, configuration objects and custom code.
[[File:IL_Platform_Architecture.png|thumb|The UBIK platform architecture]]
Nearly all use-cases in {{UBIK}} projects are either related to the mobile client or to interfacing with 3rd party systems. Though the specific implementation can vary greatly from project to project, the general flow of information through the {{UBIK}} modules is almost always similar. If there is a problem, it has to occur in one of the respective steps, caused by one of the involved dependencies.
For use-cases that interface with 3rd party systems, the {{UBIK}} Proxy mechanism is an additional source of complexity; but there's a [[HowTo:Configure_Proxies|separate article]] for that.
=== Types of problems ===
==== Performance issues ====
Performance issues can be caused by:
* Infrastructure
** Network security restriction
** User rights restriction
* Client App
** Erroneous data (unexpected values provoke the problem)
** Wrong configuration (the profile or a configuration object coming from the server is misconfigured)
** UI customizing (some XAML contains an error)
** Core implementation (the app itself has a bug)
* Web Service, Studio or Enterprise Service
** A manual step was forgotten (rebuilding the custom code, releasing the ACM meta definitions, restarting the web service, ...)
** Plugin code (a standard or customer plugin has a bug)
** Custom code (custom code of meta classes or the custom code library has a bug)
= Fix: Performance Problems =
If you're in the technical design stage, you've already found out the reason for the performance issues. In case of a hardware or infrastructure bottleneck, you can either try to improve the circumstances, or adapt to them by optimizing your solution.
=== Leverage strengths ===
Usually, the server is strong and fast, the mobile device less so, and the network is a performance graveyard.
If you wanted to waste as much performance and time as possible, you would maximize the number of network interactions and shift all the workload to the client application.
Vice versa, leveraging the strengths in {{UBIK}} means shifting all the calculation and preparation to the server and delivering the results to the client in the most compact way, in one request-response cycle.
Consider a shoe shop as an analogy: instead of presenting the whole catalog at once, we lead the user through a few filter choices first.
The user has to perform a few additional navigation steps, but on the other hand, they have to make that choice anyway; we even help them select a pair of shoes by leading them through the right choices.
As a nice side effect, the result consists of much fewer shoes, so it's computationally cheaper to load the associated content (e.g., product videos). Optimally, the parameters for the filtering can even be inferred without the user entering them explicitly, e.g., by looking at the weather and the user's calendar (sunny weather, hiking trip: probably not the rain boots).
In some cases, the use-case can be rearranged in this way, so that the amount of data and information presented to the user at any one point in time is smaller.
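The principle can be sketched as follows, with a hypothetical shoe-catalog service standing in for a real {{UBIK}} web service query (all names are illustrative, not product APIs):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Shoe(string Name, string Category, double Size);

public static class ShoeService
{
    // Wasteful: ship the whole catalog over the network and let the
    // comparatively weak client do the filtering.
    public static List<Shoe> GetAllShoes(List<Shoe> catalog) => catalog;

    // Leveraging the server: apply the user's (or inferred) filter
    // parameters on the strong server and return only the compact
    // result set in one request-response cycle.
    public static List<Shoe> GetMatchingShoes(
        List<Shoe> catalog, string category, double size)
    {
        return catalog
            .Where(s => s.Category == category
                        && Math.Abs(s.Size - size) < 0.5)
            .ToList();
    }
}
```

The second variant trades a tiny amount of server CPU for a large saving in network transfer and client-side work, which is usually the right trade-off on mobile devices.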
= Fix: Crashes =
Crashes usually happen because an unhandled exception is thrown by some module.
The real problem is either that the situation shouldn't occur in the first place or that the program cannot deal with that case; maybe it's a buggy dependency or erroneous input data.
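For instance, guarding a module boundary against erroneous input turns a crash into a diagnosable log entry; the names below are hypothetical, not {{UBIK}} APIs:

```csharp
using System;

public static class ImportStep
{
    // A (hypothetical) step of an interface import that may receive
    // erroneous input data from a 3rd party system.
    public static int? ParseQuantity(string raw)
    {
        try
        {
            // The "real" work that throws on malformed input.
            return int.Parse(raw);
        }
        catch (FormatException ex)
        {
            // Instead of letting the exception bring the process down,
            // record the offending input so its source can be fixed.
            Console.Error.WriteLine($"Bad quantity '{raw}': {ex.Message}");
            return null;
        }
    }
}
```

Whether swallowing the exception like this is acceptable depends on the case: sometimes failing fast is correct, and the real fix is preventing the bad input in the first place.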
= Fix: Faulty data =
For faulty data, we have to find out where it comes from and solve the problem at its source (or as close to it as possible).
The rule of thumb here is: Don't try to cope with the faulty data when processing or showing it. Instead, fix the problem at the source and repair the data by reimporting.
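A sketch of this rule, with hypothetical names: repair a known data fault once at the import boundary, instead of patching around it in every consumer.

```csharp
using System;

public static class Importer
{
    // Good: normalize at the source, so all downstream consumers
    // (views, reports, exports) see clean data.
    public static string ImportUnit(string rawUnit)
    {
        var unit = rawUnit.Trim().ToUpperInvariant();
        // Repair a known legacy spelling once, at import time.
        return unit == "MTR" ? "M" : unit;
    }
}

public static class Viewer
{
    // Bad: if the faulty value is stored as-is, every single consumer
    // needs the same workaround, and they will inevitably drift apart.
    public static string DisplayUnit(string storedUnit)
        => storedUnit == "MTR" ? "M" : storedUnit;
}
```

After fixing the importer, the already-stored faulty values are then repaired by reimporting, as described above.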
= Fix: Other misbehavior =
Maybe the issue is a simple typo or a wrong setting, and you can fix the problem with a simple measure. But since you're reading this, the solution is probably not that simple, and we have to approach it conceptually.
[[Category:Best Practices (internal)|Debug a Customizing]]
[[Category:FAQ|Debugging UBIK]]
[[Category:How-To|Debugging UBIK]]
[[Category:Resources (internal)|Debug a Customizing]]