Article · Open Access · National (Brazilian) Production · Peer Reviewed

Automated tests for cross‐platform mobile apps in multiple configurations

2019; Institution of Engineering and Technology; Volume: 14; Issue: 1; Language: English

10.1049/iet-sen.2018.5445

ISSN

1751-8814

Authors

André Augusto Menegassi, André Takeshi Endo,

Topic(s)

Software Testing and Debugging Techniques

Abstract

IET Software, Volume 14, Issue 1, pp. 27-38. Research Article, Open Access.

Automated tests for cross-platform mobile apps in multiple configurations

Andre Augusto Menegassi
Department of Computing, Federal University of Technology – Parana (UTFPR), Avenida Alberto Carazzai 1640, Cornelio Procopio, Brazil
Universidade do Oeste Paulista (Unoeste), R. Jose Bongiovani 700, Presidente Prudente, Brazil

Andre Takeshi Endo (corresponding author; andreendo@utfpr.edu.br; orcid.org/0000-0002-8737-1749)
Department of Computing, Federal University of Technology – Parana (UTFPR), Avenida Alberto Carazzai 1640, Cornelio Procopio, Brazil

First published: 01 February 2020. https://doi.org/10.1049/iet-sen.2018.5445. Citations: 1.
Abstract

Cross-platform apps stand out by their ability to run on various operating systems (OSs), such as Android, iOS, and Windows. Such apps are developed using popular frameworks for cross-platform app development, such as Apache Cordova, Xamarin, and React Native. However, the mechanisms to automate their tests are not cross-platform and do not support multiple configurations. Hence, different test scripts have to be coded for each platform, and there is no guarantee they will work in different configurations varying in, e.g., platform, OS version, and available hardware. This study proposes mechanisms to produce automated tests for cross-platform mobile apps. To set up the tests to execute in multiple configurations, the authors' approach adopts two reference devices: one running Android and the other iOS. As both platforms have their own user interface (UI) XML representation, the authors also investigated six individual expression types and two combined strategies to locate UI elements. They developed a prototype tool called cross-platform app test script recorder (x-PATeSCO) to support the proposed approach, as well as the eight locating strategies considered. They evaluated the approach with nine cross-platform mobile apps, comparing the locating strategies on six real devices.

1 Introduction

Currently, mobile devices permeate the daily life of most people and are available in various formats, mostly as smartphones, tablets, and wearables. They are equipped with powerful processors, large storage capacity, and several sensors [1]; modern operating systems (OSs) control the hardware of those devices. In a survey from the International Data Corporation (IDC) [2] on the market share of mobile OSs, Android and Apple iOS were the most consumed platforms in the first quarter of 2017, with 85% and 14.7%, respectively.
A Gartner survey [3] reported smartphone sales in the first quarter of 2017: the Android platform (86.1%) was the market leader, followed by the Apple iOS platform (13.7%). Such OSs serve as a platform for executing a wide variety of software applications called mobile apps. The Statista site [4] offers statistics about the number of apps available for download in the main distribution stores: Android has the largest number of apps available to its users, with 2.8 million, while Apple iOS has 2.2 million.

The development of mobile apps can be classified into three groups: native apps, browser-based Web apps, and hybrid apps [5]. Native apps are developed using the mobile OS Software Development Kit (SDK), taking full advantage of the device functions as well as the OS itself. Web apps are developed with technologies to build software for the Web, such as HTML5, CSS3, and JavaScript [6]; they are stored on a Web server, run in a client browser, and do not have access to advanced features of the mobile OS. Finally, hybrid apps combine Web technologies such as HTML5, CSS3, and JavaScript (using a native component known as WebView) with plugins that access the OS native application programming interfaces (APIs). Commercial and open source frameworks support the development of hybrid apps, such as Apache Cordova (https://cordova.apache.org), PhoneGap (http://phonegap.com), Sencha Touch (https://www.sencha.com), IONIC (http://ionicframework.com/), and Intel XDK (https://software.intel.com/xdk). Unlike native apps, hybrid apps have the advantage of executing across multiple platforms.

Cross-platform apps have gained momentum due to their ability to be built for different OSs, reducing the need for platform-specific code. Such an app has a common code base, though platform-specific builds are needed. [When this study mentions that a cross-platform app is run, it means a proper platform-specific build of the app.]
Other approaches to cross-platform development are frameworks such as React Native (https://facebook.github.io/react-native) and Xamarin (https://www.xamarin.com). They support the creation of apps with native user interface (UI) elements using programming languages such as JavaScript and C#. The final product is a native and cross-platform app [7-9]; in this work, we adopt the term native cross-platform for this type of app.

Cross-platform app testing is challenging due to the variability of device settings and mobile OSs on the market [1, 10, 11]. As testing an app on a single device does not guarantee correct operation on others [12, 13], each device represents a configuration (platform, hardware, screen size, sensors etc.) that needs to be verified. While automation is essential to cover many configurations, current test mechanisms are not cross-platform. For instance, a UI test script using a tool such as Appium (http://appium.io/) has to be written twice, since the UI XML representations of Android and iOS are different. Such representations may also differ between versions of the same platform, as in Android 4 and Android 6 [14]; this implies that two or more scripts might be needed for different versions of the same OS. The maintenance of UI test scripts is known to be a costly task [15-17], and cross-platform apps aggravate it by requiring two or more versions of the same test script. Existing tools and research on automated testing have focused on specific platforms [18]; there is a lack of approaches for the automated testing of cross-platform apps.

This study introduces an approach to generate scripts to test cross-platform mobile apps in multiple configurations. The approach relies on a reference device for each OS; currently, we have worked with the Android and iOS platforms. As it focuses on black-box testing at the system level, strategies to locate UI elements are also investigated.
In particular, we analysed six individual expression types and two combined strategies. The approach and the locating strategies have been implemented in a prototype tool named cross-platform app test script recorder (x-PATeSCO). To evaluate the approach and compare the locating strategies, we conducted an experimental study with nine cross-platform mobile apps tested on six real devices (multiple configurations). In summary, this study makes the following contributions:

- We describe the difficulties of implementing automated tests for cross-platform apps targeting different OSs such as Android and iOS (Section 2).
- We introduce an approach that supports the generation of scripts capable of testing cross-platform apps in several configurations, backed by an investigation of six individual XPath expressions and two combined strategies (Section 3).
- We present x-PATeSCO, an interactive tool that automates the use of the proposed approach (Section 4).
- We describe an extensive experimental evaluation of the proposed approach with nine open source and industrial apps, assessing the effectiveness and performance of the approach and its associated locating strategies (Section 5).

This paper is structured as follows. Section 2 motivates the research problem. Section 3 introduces an approach to test cross-platform mobile apps. Section 4 describes the prototype tool, and the experimental evaluation is presented in Section 5. Section 6 discusses the obtained results. Section 7 covers related work. Finally, Section 8 concludes the paper and sketches future work.

2 Research problem

This section exemplifies the research problem with a hybrid app developed with Apache Cordova. Nevertheless, this problem is shared by all cross-platform apps, even those developed with native cross-platform frameworks such as Xamarin and React Native.
UIs of a hybrid app are built using HTML elements, which are interpreted and transformed into an XML structure by the mobile platform. This structure differs between the platforms, as shown in Fig. 1.

Fig. 1: Android and iOS UI representations

Notice that an HTML element of the Fresh Food Finder app (https://github.com/triceam/Fresh-Food-Finder) has two representations, one presented by Android and the other by iOS. Such a structure might differ even within the same platform. Table 1 presents a brief mapping between HTML elements and the XML native UI elements generated by the Android and iOS platforms.

Table 1. HTML to XML native UI elements

HTML element type | Android element         | iOS element
input button      | android.widget.Button   | UIAButton
input submit      | android.widget.Button   | UIAButton
div               | android.widget.View     | UIAStaticText
span              | android.widget.View     | UIAStaticText
label             | android.widget.View     | UIAStaticText
select            | android.widget.Spinner  | UIAElement
input text        | android.widget.EditText | UIATextField
textarea          | android.widget.EditText | UIATextField
a (anchor)        | android.widget.View     | UIALink

XML nodes carry key attributes that contain descriptive information about UI elements; some of these are visible to app users. For the Android platform, the key attributes are 'content-desc' and 'text', while for iOS they are 'label' and 'value'. Element identifier attributes are also available, namely 'resource-id' for Android and 'name' for iOS.

Element selectors are used to automate UI test cases of mobile apps. A test case consists of test input values, execution conditions, and expected results, designed to achieve a specific objective [19]. Selectors are 'patterns' or 'models' that provide mechanisms to locate elements (or nodes) in computational structures such as XML or HTML [20]. After selecting a UI element, the tester can programme an action or verify a property.
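The platform-specific nature of these selectors can be reproduced with any XPath engine. The following is a minimal, self-contained Python sketch (using the standard library's ElementTree and its XPath subset, so `.//` replaces the paper's `//*/` prefix); the XML fragments are illustrative stand-ins for the two UI dumps of the same screen, not the app's real hierarchy:

```python
import xml.etree.ElementTree as ET

# Toy stand-ins for the Android and iOS UI dumps of the same screen (Fig. 1).
android_ui = """<hierarchy>
  <android.view.View content-desc="Search For a Market" text=""/>
</hierarchy>"""

ios_ui = """<AppiumAUT>
  <UIAStaticText label="Search For a Market" value=""/>
</AppiumAUT>"""

# Two platform-specific selectors are needed for the same logical element;
# note that only the attribute *value* is shared across platforms.
android_btn = ET.fromstring(android_ui).find(
    ".//android.view.View[@content-desc='Search For a Market']")
ios_btn = ET.fromstring(ios_ui).find(
    ".//UIAStaticText[@label='Search For a Market']")

assert android_btn is not None and ios_btn is not None
```

This illustrates the core problem of Section 2: the element type and the key attribute name differ per platform, so a naive script must duplicate every selector.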
A well-known mechanism for selecting XML elements is the query expression in XPath, a query language for selecting elements (nodes) in computational structures that represent XML documents [21]. As shown in Fig. 1, the platform manufacturers do not follow a common standard to represent UI elements. Such differences reflect negatively on testing activities for cross-platform apps: the app is developed with features that enable its cross-platform execution, but the mechanisms to test it are not cross-platform. Therefore, different test scripts are required to automate the testing of the same UI, each with appropriate selectors. For example, the XPath selector required to select the highlighted element in Fig. 1 (button 'Search for a Market') is '//*/android.view.View[@content-desc='Search For a Market']' for Android 6.0.3 and '//*/UIAStaticText[@label='Search For a Market']' for iOS 9.3. These XPath queries use element types and key attributes that are platform-specific; only the attribute values remain the same on both platforms. The tester thus has to identify and code at least two selectors to test the same UI element. Even then, there is no guarantee that the automated test will work in other configurations, with a different OS version, hardware manufacturer, sensors, and so on. To improve the testing of cross-platform apps, we propose an approach to record and generate a single test script capable of running in different configurations.

3 Approach overview

Our approach defines a mechanism to automate the testing of cross-platform apps by constructing a test script to be run in multiple configurations. It is based on the insight that, using a reference device for each platform, a robust automated test can be produced by investigating and combining individual expressions. To do so, we propose an approach divided into three main steps, illustrated in Fig. 2.
First, a reference device is chosen for each platform; in this work, we have one running Android and another running iOS (Fig. 2a, Section 3.1). Second, we define an event-driven model to represent the test cases, and six individual expressions to locate UI elements are investigated (Fig. 2b, Section 3.2). Third, a single test script is generated, with two strategies that combine the individual expressions into a more robust setting (Fig. 2c, Section 3.3). The three steps are detailed in the following subsections.

Fig. 2: Approach overview

3.1 Device selection

This step consists of selecting a mobile device for each platform, namely one Android and one iOS. Testers might take several aspects into account, such as popularity, availability, final users, and so on. Some studies suggest choosing mobile devices for app testing based on their general popularity among end users [22, 23]. The selection of reference devices might also come from an existing demand; for instance, specific apps from a given business organisation run on a controlled set of devices. While different criteria may be defined and applied to select the devices, any pair of reference devices can be used. The selected devices are then referred to as the reference configurations. In addition, the cross-platform mobile app under test (AUT) should be installed on those devices.

3.2 UI element selection and test case definition

This step defines a model to express the test cases. In particular, we want to represent the expected event sequence of the user's interactions with the AUT. We adopted an event sequence graph (ESG) [24] to model the test case as a sequence of UI events (nodes) connected by edges; two ESGs are shown in Fig. 3. For each event, a UI element is selected and some action is executed (e.g. a click or a text input). This step is reproduced on the reference devices and two compatible ESGs are generated.
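The event-sequence model above can be sketched as a small data structure. The following is a minimal Python sketch under our own naming (the classes `UIEvent`, `TestCase`, and the `compatible` check are ours, not x-PATeSCO's; the tool itself generates C# projects), encoding a test case as an ordered event list and the compatibility rule of Section 3.2:

```python
from dataclasses import dataclass, field

@dataclass
class UIEvent:
    element_desc: str      # human-readable description of the UI element
    action: str            # e.g. "click" or "input"
    data: str = ""         # text to type; empty for clicks
    xpath_candidates: dict = field(default_factory=dict)  # expression type -> XPath

@dataclass
class TestCase:
    name: str
    events: list           # ordered UIEvent list (ESG nodes joined by edges)

def compatible(a: TestCase, b: TestCase) -> bool:
    """ESG compatibility: same number of events, same action/data per event."""
    return len(a.events) == len(b.events) and all(
        x.action == y.action and x.data == y.data
        for x, y in zip(a.events, b.events))

# A hypothetical two-event test case for illustration.
tc = TestCase("search_market", [
    UIEvent("Search For a Market", "click"),
    UIEvent("query box", "input", data="organic apples"),
])
assert compatible(tc, tc)
```

The same sequence recorded on the Android and iOS reference devices would yield two such objects differing only in their per-platform `xpath_candidates`.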
ESG compatibility means that both models have the same number of events and the same data/action for each element. Fig. 3 shows a generic example of a test case, with one ESG for each platform, Android and iOS, respectively. The ESG represents a test case, and each node is an element with the data needed for its test. For this example and all test cases used in the experiment (Section 5), the sequence of UI events was the same on both platforms.

Fig. 3: ESGs modelling the events under test

During UI element selection, the XML structure of the app's UI is extracted. Based on this structure, each UI element has its type identified (textbox, button, anchor etc.), and its key and identifier attributes are stored with their respective values. This data supports the construction of different XPath expressions to locate the element for the event under test. This study investigates six individual expressions: AbsolutePath and IdentifyAttributes represent the state of practice for UI selectors; CrossPlatform is an attempt to provide a single selector for both platforms; ElementType, AncestorIndex, and AncestorAttribute are based on experience-based guidelines for more generic and robust XPath expressions. The six expressions are described as follows.

AbsolutePath: a platform-specific expression based on the absolute path from the root to the given element. In some cases, indexes are required to identify the element's position within the XML structure. This expression has been employed in Web application testing to find elements in the document object model (DOM) structure [25], and is a well-known alternative when the element has no identifier attribute.

IdentifyAttributes: an expression based on the values of attributes that identify the element, such as resource-id for the Android platform and name for the iOS platform.
Such an expression is also well known for Web applications [25, 26]; the id attribute is one of the primary strategies to locate HTML elements.

CrossPlatform: we propose this expression to define a single selector for different platforms. Such an expression is prepared to select a particular element of the app's UI independently of its execution platform. It combines the key attributes (Android: content-desc or text; iOS: label or value) of elements and their values on both platforms, as discussed in Section 2. Rao and Pachunoori [27] suggest the use of expressions that combine attributes when the identifier attribute is not available; in our study, we combine the attributes of the two platforms in a single expression.

ElementType: an expression to find an element based on the combination of its type with key attributes (Android: content-desc or text; iOS: label or value) or platform-specific identifier attributes (Android: resource-id; iOS: name). Rao and Pachunoori [27] also suggest combining the element type in XPath expressions. We include the element type, prioritising the combination with key attributes because they are more common than identifier attributes.

AncestorIndex: a platform-specific expression based on the index of the desired element within its ancestor element. The index defines the exact position of the element inside the ancestor container. This expression is hybrid: the container is located by a relative query expression (based on key and identifier attributes, when available), while the inner element is located by its index (absolute positioning). This expression might help to find the element when attribute-based expressions fail due to dynamic changes, i.e. when each run produces a different valuation of the attributes.
AncestorAttribute: similar to the previous expression, but the index is replaced by a location based on the values of key attributes.

An expression type is not always applicable on all platforms and their versions. Each platform can present a different UI XML structure and different attributes, which impacts the selection of elements to test. Apart from the CrossPlatform expression, each expression has a platform-specific version. The single test engine, described in the next section, identifies the platform and selects the appropriate expression.

3.3 Single test engine

This step establishes a common mechanism for automated UI testing of cross-platform apps in multiple configurations. The test case represented by an ESG is the basis, since each event contains the UI element's data and the query expressions to select it. As the aforementioned individual expressions might be limited in some contexts and only applicable in some cases, we introduce two locating strategies that combine the six expressions. The purpose of this combination is that one expression may compensate for the weaknesses of another, providing an overall more robust UI element selection strategy.

ExpressionsInOrder: expressions are sorted by their type and executed sequentially; if the first expression fails, the next one is executed, and so on. The strategy aims to avoid the incomplete execution of a test case due to an element-not-found error. We compare this strategy with conventional individual expressions in Section 5. The order we defined prioritises relative expressions, starting with the CrossPlatform expression due to its suitability to select UI elements on both Android and iOS. Absolute expressions have low priority, since studies indicate their fragility in element localisation [15, 16, 27]. The sequence is: CrossPlatform, ElementType, IdentifyAttributes, AncestorAttribute, AncestorIndex, AbsolutePath.
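The sequential fallback of ExpressionsInOrder can be sketched offline as follows. This is a minimal Python sketch, assuming ElementTree's XPath subset in place of a live Appium session; the function name `exec_in_order` mirrors the paper's ExecInOrder, but the implementation details are ours:

```python
import xml.etree.ElementTree as ET

# Priority order defined in Section 3.3: relative expressions first.
ORDER = ["CrossPlatform", "ElementType", "IdentifyAttributes",
         "AncestorAttribute", "AncestorIndex", "AbsolutePath"]

def exec_in_order(ui_root, expressions):
    """expressions: dict mapping expression type -> XPath (types may be absent
    when that expression is not applicable to the element)."""
    for kind in ORDER:
        xpath = expressions.get(kind)
        if xpath is None:
            continue                      # expression not applicable here
        element = ui_root.find(xpath)
        if element is not None:
            return kind, element          # first successful locator wins
    # No expression located the element: the test case cannot proceed.
    raise LookupError("element not found by any expression")

# Toy UI dump; tag and attribute names are illustrative.
ui = ET.fromstring(
    '<hierarchy><android.widget.Button resource-id="btnOk" text="OK"/></hierarchy>')
kind, el = exec_in_order(ui, {
    "ElementType": ".//android.widget.Button[@text='OK']",
    "AbsolutePath": "./android.widget.Button",
})
assert kind == "ElementType"   # higher-priority expression matched first
```

If the ElementType query failed (e.g. the text attribute changed), the same call would silently fall through to the AbsolutePath query instead of aborting the test case.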
Algorithm 1 (see Fig. 4) presents pseudocode for the proposed strategy. The ExecInOrder method (line 2) receives as parameters a set of six XPath expressions and locates an element in the app's XML UI structure using those expressions (lines 9–14). In the end, the element found (by one of the expressions) is acted on according to the action indicated by the tester (line 18). When the element is not found by any of the expressions, an exception is thrown to notify the test runner that the test case cannot proceed (lines 15 and 16). The general idea of this algorithm was defined in this work.

Fig. 4, Algorithm 1: ExpressionsInOrder algorithm

ExpressionsMultiLocator: in this strategy, all expressions are executed and the element is selected by voting criteria. These criteria are based on reliability estimates that determine a weight for each type of expression. This strategy was adapted from Leotta et al. [16], who employed it for Web application testing; as it showed promising results there [16], we adapted it for cross-platform apps and compare it with the other strategies in Section 5. Table 2 lists the weight of each expression. The weights were defined as follows. XPath expressions based on attribute values displayed in the UI have a high weight, due to the stability of their values and their independence from indexes and/or absolute paths. Examples of such attributes are content-desc, text, label, and value; the expressions in this group are ElementType, CrossPlatform, and AncestorAttribute.

Table 2. Expression types and weights

Expression type    | Reliability | Weight
CrossPlatform      | high        | 0.25
ElementType        | high        | 0.25
IdentifyAttributes | medium      | 0.15
AncestorAttribute  | medium      | 0.25
AncestorIndex      | low         | 0.05
AbsolutePath       | low         | 0.05

XPath expressions based on identifier attribute values, such as resource-id (Android) and name (iOS), come next.
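Using the Table 2 weights, the voting of ExpressionsMultiLocator can be sketched offline as follows. This is a minimal Python sketch, again with ElementTree standing in for a live Appium session; the function name `exec_multi_locator` mirrors the paper's ExecMultiLocator, but the implementation and the toy UI are ours:

```python
import xml.etree.ElementTree as ET

# Weights from Table 2: one weight per expression type.
WEIGHTS = {"CrossPlatform": 0.25, "ElementType": 0.25, "IdentifyAttributes": 0.15,
           "AncestorAttribute": 0.25, "AncestorIndex": 0.05, "AbsolutePath": 0.05}

def exec_multi_locator(ui_root, expressions):
    """Run every expression; each hit adds its weight to the element found,
    and the element with the highest accumulated weight wins."""
    votes = {}                                   # element -> accumulated weight
    for kind, xpath in expressions.items():
        el = ui_root.find(xpath)
        if el is not None:
            votes[el] = votes.get(el, 0.0) + WEIGHTS[kind]
    if not votes:
        raise LookupError("element not found by any expression")
    return max(votes, key=votes.get)             # highest vote (weight sum)

# Toy UI with two buttons; an index-based locator picks the wrong one.
ui = ET.fromstring('<h><Btn resource-id="ok" text="OK"/><Btn text="Cancel"/></h>')
winner = exec_multi_locator(ui, {
    "ElementType": ".//Btn[@text='OK']",           # votes for OK (0.25)
    "IdentifyAttributes": ".//Btn[@resource-id='ok']",  # votes for OK (0.15)
    "AbsolutePath": "./Btn[2]",                    # votes for Cancel (0.05)
})
assert winner.get("text") == "OK"   # attribute-based votes outweigh the index
```

The example shows the point of the weighting: a stale positional expression is outvoted by the more reliable attribute-based ones instead of derailing the test.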
In some app development frameworks, the values of identifier attributes change between executions, which weakens selection reliability; an example of this type of expression is IdentifyAttributes. XPath expressions based on absolute paths or indexes (positioning) have the smallest weights: they have low confidence due to their fragility during software evolution [15, 16, 27]. Examples of these expressions are AbsolutePath and AncestorIndex.

Algorithm 2 (see Fig. 5) presents pseudocode for the ExpressionsMultiLocator strategy. As in the previous combined strategy, the ExecMultiLocator method (line 2) receives as parameters a set of six XPath expressions. For each element found, a weight is assigned according to the current expression type; an element returned by different expressions has its weights accumulated. In the end, the element with the highest vote (weight sum) is used in the test case execution (lines 24 and 25). When the element is not found by any of the expressions, an exception is thrown to notify the test runner that the test case cannot proceed (lines 21 and 22).

Fig. 5, Algorithm 2: ExpressionsMultiLocator algorithm

4 Tool implementation

The approach has been implemented in a prototype tool called cross-platform app test script recorder (x-PATeSCO). The tool is based on Appium, an open source framework to automate tests of native, Web, or hybrid apps. Appium is cross-platform and makes it possible to automate tests for the iOS and Android platforms using a Selenium WebDriver API. The x-PATeSCO architecture is illustrated in Fig. 6. The tool uses the Selenium WebDriver API (http://www.seleniumhq.org/projects/webdriver/) and an Appium server to connect to the two reference devices, one Android and the other iOS, and send automation commands to the app's UI.
The events specified by the tester are recorded, and the tool automatically extracts and parses the UI's XML to build the test script using the appropriate XPath expressions (Section 3). The tool also generates a test project for Microsoft Visual Studio, encoded in C# with support for the unit testing framework (https://msdn.microsoft.com/en-us/library/ms243147(vs.80).aspx). The project contains the classes that represent the automated test cases, modularised to help testers in their test activities.

Fig. 6: Tool architecture

Fig. 7 shows a screenshot of x-PATeSCO; it offers functionalities to inspect UI elements, define their actions (click or text input), and generate the test script. The first column provides fields to set up a remote connection with the Appium server and the mobile devices. The second column shows a visual mechanism for elaborating the test case, providing information related to the app's UI. The third column brings the expressions automatically constructed by the tool based on the available data of the UI XML structure. Finally, the fourth column highlights the selected UI element in a screenshot of the AUT.

Fig. 7: x-PATeSCO tool

Fig. 8 illustrates an excerpt of a test script generated by the tool; in this case, the ElementType expression was selected by the tester. Lines 4–26 configure the connection with the Appium server, handling specific parameters for each platform. For each event, a method is created and invoked (lines 30–34); each method uses one of the eight strategies (proposed in Section 3), in accordance with the tester's choice. Such methods contain the appropriate XPath expressions to select an element (lines 42–49); in this case, the ElementType strategy was adopted. Then, lines 53–55 execute the expression and fire the recorded action.

Fig. 8: Test script generated by x-PATeSCO

We expect that the tool will help developers and testers implement more robust test scripts for cross-platform apps. The tool can fit into different testing processes. For instance, textual test cases for a cross-platform app may be provided, and a tester/developer needs to automate them as scripts; in this scenario, x-PATeSCO can support the recording and generation of the test scripts. Moreover, the produced scripts are compatible with Appium and might be used in cloud test environments, such as Amazon Device Farm (https://aws.amazon.com/en/device-farm), Bitbar (http://www.bitbar.com/testing), and TestObject (https://testobject.com); these services offer a large number of real devices that can be connected and used to test cross-platform mobile apps. Finally, the tool may be used by testers with little experience in cross-platform app testing: tests are recorded entirely through its UI (see Fig. 7), so x-PATeSCO has a mild learning curve. The x-PATeSCO tool is available as an open source project in [28]. The overall application of the tool and its locating strategies is analysed in the next section.

5 Evaluation

To evaluate the proposed approach, it is necessary to understand the performance of the locating strategies, to analyse their behaviour with different apps, and to verify the results obtained in different configurations. Therefore, we conducted an experimental evaluation to compare the eight locating strategies: six individual expressions (Section 3.2) and two combined strategies (Section 3.3). The following research questions (RQs) have been investigated:

RQ1: How effective are the locating strategies to test cross-platform mobile apps in multiple configurations?

RQ2: How do the locating strategies perform with respect to execution time?
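The per-platform connection setup that a generated script performs (its lines 4–26) can be sketched as plain capability dictionaries in the style of Appium's desired capabilities. The device names, app paths, and automation backends below are placeholders of our own, not values taken from the paper:

```python
def desired_capabilities(platform: str, app_path: str) -> dict:
    """Build an Appium-style desired-capabilities dict for one reference device.
    All concrete values are illustrative placeholders."""
    common = {"app": app_path, "newCommandTimeout": 300}
    if platform == "Android":
        return {**common, "platformName": "Android",
                "deviceName": "reference-android",
                "automationName": "UiAutomator2"}
    return {**common, "platformName": "iOS",
            "deviceName": "reference-iphone",
            "automationName": "XCUITest"}

# The same script body can then be driven against either reference device
# by switching only this capability set.
caps = desired_capabilities("Android", "/path/to/aut.apk")
assert caps["platformName"] == "Android"
```

Keeping the platform-specific part confined to this capability set is what lets a single recorded event sequence run against both reference configurations.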
RQ1 aims to compare the effectiveness of the locating strategies by analysing how applicable the expressions are and whether tests based on them can be executed successfully in different configurations. First, we measure applicability, i.e. in how many events each strategy can be used to select UI elements. Executability was then taken into account to see how successful a strategy might be at selecting an element at runtime; we analyse executability at the event and test case levels. As for RQ2, we aim to analyse how the locating strategies might influence the execution time of test cases; to that end, we measure the CPU time, in seconds, of events successfully executed using a given locating strategy.

5.1 Experimental objects and procedure

To answer the RQs we proceeded as follows. We selected a set of cross-platform apps, two industrial apps and seven sample apps, to run on Android an
