
HowTo:Debugging UBIK

One of the most complex challenges when working on any software project is to debug unintended behavior. In {{UBIK}}, there is an inherent structure to every project, which we can exploit for debugging. Let's find out how.
== Quick-fix check list ==
Many issues can be resolved by going through the following check list.
* Check settings and configurations for typos, missing entries and other errors
* Restart {{UBIK}} Studio and reconnect to your DB to avoid caching issues
* Check whether all plugins were loaded correctly
* In case the custom code was changed, or {{UBIK}} was upgraded to a new version:
** Compile and publish the customizing (F6)
** Restart the Enterprise Service
** Restart all Web Services
* In case the data model was changed:
** Rebuild and publish the ACM meta definitions using the ACM manager
** Restart all Web Services
** Restart the {{UBIK}} client application to make sure new meta definitions and content are received
== A general policy for debugging ==
Debugging can be approached methodically. Here's a basic plan for debugging software.
# '''Reproduction''': Get all available, relevant information about the bug and confirm the problem in a test setup
# '''Inspection''': Inspect the actual behavior to understand the cause
# '''Fix''': Design and implement a solution
# '''Retest''': Test the fix

This is basically independent of the product or framework you're using. With {{UBIK}}, we can get more concrete.
== Debugging {{UBIK}} ==
The first step, namely to find a reproduction, stays the same as in the general case described above: Ask, test and refine.
The general approach to finding the cause, namely by improving your hypothesis and inspecting what's going on, is still valid, too.
 
However, there are some considerations we can specify with respect to {{UBIK}}.
<!-- DO NOT REMOVE THIS -->{{Template:HowTo/Begin}}<!-- DO NOT REMOVE THIS -->
= Reproduction =
==== Full Test System ====
To reproduce the problem, you require a test setup. This usually means a local copy of the database the problem occurred with and an installation of the {{UBIK}} products relevant for the problem. It is important to use the same binaries, plugins and versions as in the system where the problem occurred.
Then, we can try to provoke the reported issue in the test setup. This might require getting more information about the issue.
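If you are not sure whether your test setup really runs the same binaries as the affected system, a small helper like the following can list the file versions of all assemblies in an installation folder so you can compare the two systems. This is only a generic .NET sketch; the folder path is an example, not a fixed {{UBIK}} location.
<source lang="csharp">
using System;
using System.Diagnostics;
using System.IO;

class BinaryVersionLister
{
    static void Main(string[] args)
    {
        // Example path only – point this at the installation folder you want to check.
        string installDir = args.Length > 0 ? args[0] : @"C:\UBIK\WebService\bin";

        foreach (string file in Directory.GetFiles(installDir, "*.dll"))
        {
            // FileVersionInfo reads the version stamped into the assembly.
            var info = FileVersionInfo.GetVersionInfo(file);
            Console.WriteLine($"{Path.GetFileName(file),-40} {info.FileVersion}");
        }
    }
}
</source>
Running it once against the production folder and once against the test folder makes version mismatches easy to spot.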
==== Isolation Testing ====
If a full test setup is not feasible, isolating a (presumably) faulty part and testing it individually often makes sense.
In {{UBIK}} Studio, there are two tools for this:
* Who-Bert Debugging Tool
* View Test Tool
Both can be used to test the behavior of {{UBIK}} objects (and custom code) on the server side. With Who-Bert code and manually created test data, you can additionally set up a "mock" or "fake" situation, to test the behavior under very specific circumstances. The View Test Tool simulates how the web service assembles data for the client, ignoring the ACM meta definitions (context, scopes etc.).

= Inspection =
Once you have a test setup and are able to reproduce the issue, you can inspect what's happening in detail to find out why the problem occurs. This can be done either by debugging with Visual Studio, or by producing diagnostic output in the form of log entries, {{UBIK}} objects, property values, or UI customizing.
=== Inspect the mobile client ===
* Use the [[Developer_Mode]] to inspect the currently visible view models and their values.
* Inspect the log files of the mobile client, including the web service client log.
 
=== Inspect the web services or the Enterprise Service ===
* Inspect the log files of the web service or Enterprise Service.
* Modify your plugin or programmatic customizing to output log messages describing the state of your program at critical points (see the sketch after this list).
* Modify your plugin or programmatic customizing to write diagnostic {{UBIK}} objects describing the state of your program at critical points.
* Use a Who-Bert script to test a specific setup and output log messages to the console.
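As a minimal example of such diagnostic output, the following sketch shows a plain .NET logging helper that custom code or a plugin could call at critical points. The log file path and the <code>DebugLog.LogState</code> helper are made up for illustration; they are not part of the {{UBIK}} API.
<source lang="csharp">
using System;
using System.IO;

// Hypothetical helper – not part of the UBIK API.
public static class DebugLog
{
    private static readonly string LogFile = @"C:\Temp\customizing_debug.log";

    // Appends a timestamped message describing the current state of the program.
    public static void LogState(string location, string message)
    {
        string line = $"{DateTime.Now:yyyy-MM-dd HH:mm:ss.fff} [{location}] {message}";
        File.AppendAllText(LogFile, line + Environment.NewLine);
    }
}

// Example usage inside custom code:
// DebugLog.LogState("CalculateStatus", $"input count = {items.Count}, filter = {filterName}");
</source>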
 
= Hypothesizing =
In order to narrow down the cause of the problem, we can try to formulate an idea of what could have gone wrong. Ideally, we then look for proof and see it happen in action, but it's always good to know the potential error sources. In general, there are several common types of problems and, from another perspective, a set of common sources for such problems.
 
=== Visualizing the architecture and algorithm ===
In order to come up with a good hypothesis, you must understand the architecture and algorithm at work.
This means you have to find out which {{UBIK}} products and modules are involved and how the affected use-case is implemented in the project.
[[File:IL_Platform_Architecture.png|thumb|The UBIK platform architecture]]
Nearly all use-cases in {{UBIK}} projects are either related to the mobile client or to interfacing with 3rd party systems. Though the specific implementation can differ greatly from project to project, the general flow of information through the {{UBIK}} modules will almost always be similar. If there is a problem, it has to occur in one of the respective steps, caused by one of the involved dependencies.
In the latter case, the {{UBIK}} Proxy mechanism is an additional source of complexity; but there's a [[HowTo:Configure_Proxies|separate article]] for that.
=== Types of problems ===
==== Performance issues ====
Performance issues can be caused by:
* Network
** Network security restriction
** User rights restriction
* Client App
** Erroneous data (unexpected values provoke the problem)
** Wrong configuration (the profile or a configuration object coming from the server is misconfigured)
** UI customizing (some XAML contains an error)
** Core implementation (the app itself has a bug)
* Web Service, Studio or Enterprise Service
** A manual step was forgotten (rebuilding the custom code, releasing the ACM meta definitions, restarting the web service, ...)
** Plugin code (a standard or customer plugin has a bug)
** Custom code (custom code of meta classes or the custom code library has a bug)
= Fix: Performance Problems =
By the time you design a fix, you've already found out the reason for the performance issues. In case of a hardware or infrastructure bottleneck, you can either try to improve the circumstances, or adapt to them and optimize your solution.
In some cases, the use-case can also be rearranged so that less data and information is presented to the user at any one point in time.
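To verify whether such a rearrangement (or any other optimization) actually helps, you can time the critical step before and after the change. The following is a generic .NET sketch; <code>LoadChildren</code> is only a placeholder for whatever operation you suspect to be slow.
<source lang="csharp">
using System;
using System.Collections.Generic;
using System.Diagnostics;

class TimingSketch
{
    static void Main()
    {
        // Time the suspected slow step before and after an optimization.
        var stopwatch = Stopwatch.StartNew();
        List<string> result = LoadChildren();
        stopwatch.Stop();

        Console.WriteLine($"LoadChildren took {stopwatch.ElapsedMilliseconds} ms for {result.Count} items");
    }

    // Placeholder for whatever operation assembles the data shown to the user.
    static List<string> LoadChildren()
    {
        return new List<string> { "example item" };
    }
}
</source>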
= Fix: Crashes =
As explained in the hypothesizing section, crashes usually happen because of an unhandled exception being thrown by some module.
The real problem is either that the situation shouldn't occur in the first place or that the program cannot deal with that case; maybe it's a buggy dependency or erroneous input data.
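A minimal sketch of how to make such a situation visible: wrap the suspect call in a try/catch so the exception is logged, including inner exceptions and stack trace, instead of crashing the process. <code>ProcessItem</code> and the log path are placeholders, not {{UBIK}} API.
<source lang="csharp">
using System;
using System.IO;

class CrashGuardSketch
{
    static void Main()
    {
        try
        {
            ProcessItem(null);   // the call suspected to throw
        }
        catch (Exception ex)
        {
            // Log the full exception chain instead of letting the process crash.
            File.AppendAllText(@"C:\Temp\crash_debug.log",
                $"{DateTime.Now:O} {ex}{Environment.NewLine}");
        }
    }

    // Placeholder for the module code that throws under certain input.
    static void ProcessItem(string input)
    {
        if (input == null)
            throw new ArgumentNullException(nameof(input));
        Console.WriteLine(input.ToUpperInvariant());
    }
}
</source>
The log entry then tells you both where the exception originates and with which input, which is usually enough to decide whether the data, the dependency or the calling code needs fixing.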
= Fix: Faulty data =
For faulty data, we have to find out where it comes from and solve the problem at its source (or as close to it as possible).
The rule of thumb here is: Don't try to cope with the faulty data when processing or showing it. Instead, fix the problem at the source and repair the data by reimporting.
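For example, if faulty values enter the system through an import, a simple validation step at the source can reject and log them before they are written. The field names and rules in this sketch are purely illustrative.
<source lang="csharp">
using System;
using System.Collections.Generic;

class ImportValidationSketch
{
    // Illustrative import record with made-up fields.
    record ImportRow(string Id, double? Pressure);

    static void Main()
    {
        var rows = new List<ImportRow>
        {
            new ImportRow("TAG-001", 4.2),
            new ImportRow("", null)          // faulty row: missing id and value
        };

        foreach (var row in rows)
        {
            // Reject faulty data at the source instead of coping with it later.
            if (string.IsNullOrWhiteSpace(row.Id) || row.Pressure is null)
            {
                Console.WriteLine($"Rejected row '{row.Id}': missing id or pressure value");
                continue;
            }
            Console.WriteLine($"Imported {row.Id} with pressure {row.Pressure}");
        }
    }
}
</source>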
= Fix: Other misbehavior =
Maybe the issue is a simple typo or wrong setting and you can fix the problem with a simple measure. Since you're reading this, however, the solution is probably not that simple and we have to approach it conceptually.