Sunday, July 19, 2015

Very bad network simulation for testing of mobile applications [PART 2]

In the previous post we talked about the need for a platform-independent, scriptable solution for testing your mobile applications under poor Internet conditions. To complement the theory with something executable, this post introduces scripts (for Debian-like Linux systems) and a guide to set up your own WIFI access point which simulates a slow, unreliable mobile Internet connection. You will be able to connect with your Android, iOS, Windows or other devices and see, from your office, how your apps adapt.

This tutorial is divided into the following sections:

  1. A failed first attempt to solve this problem.
  2. Obtaining the right USB WIFI dongle.
  3. Tutorial for creating an AP from your Linux-based workstation.
  4. Script for changing the Quality of Service (QoS) characteristics of your AP.
  5. Script for setting a particular QoS, simulating GPRS, EDGE, 3G, LTE and similar networks.
  6. Example usage

A failed first attempt to solve this problem.

My first attempt did not end successfully. I am not saying it is a wrong way, I was just not able to make it work. The plan was to:

  • Buy a WIFI dongle capable of running in AP mode.
  • Virtualize OpenWRT (a small Linux distro, usually run on routers) in VirtualBox.
  • Install on that virtual machine a Cellular-Data-Network-Simulator, which is capable of running on OpenWRT and builds on well-known technologies: tc, iptables and CloudShark.
  • Connect devices to that AP and use CloudShark to sniff the network in order to see particular packets.
It looked promising. It would be just an integration of already existing parts, not reinventing the wheel. A fairy tale. And it worked, even though I found out it would require some work to script the way devices are connected to the Cellular-Data-Network-Simulator and the way the QoS characteristics are changed in order to switch among 2G, 3G and other networks. Nothing impossible to overcome. The biggest problem I encountered after setting it up was the stability of the AP: it switched off the WIFI dongle at random intervals. I studied various OpenWRT log files but did not find the root cause, so I was not able to fix it. I needed to think of a different way. The following describes my second attempt, which finally worked.

Obtaining the right USB WIFI dongle.

First things first. Before buying a WIFI dongle, check its chipset and see whether it is supported by a Linux driver. I am using a TP-Link TL-WN722N. Its AR9271 chipset is supported by the ath9k_htc driver.
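If you are unsure which driver a dongle ends up using, you can usually check it from the shell once it is plugged in. A quick sketch (wlan1 is just an assumed interface name, replace it with whatever interface the dongle creates):

# List USB devices; the chipset usually shows up in the device description.
lsusb

# See which kernel driver got bound to the wireless interface.
readlink /sys/class/net/wlan1/device/driver

# Or check the kernel log right after plugging the dongle in.
dmesg | grep -i -E "ath9k|wlan"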

Tutorial for creating an AP from your Linux-based workstation.

Next you will need to set up various things properly: hostapd, a DHCP server and the firewall. I followed this great post (automated in the install script for Debian-like systems here). In that install script you can also spot a part (wifi_access_point) which enables you to start the AP as a service.
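Just to give an idea of the moving parts, here is a minimal, hedged sketch of such a setup; the linked post and the install script remain the authoritative version. Assumptions in this sketch: wlan1 is the USB dongle, eth0 is the Internet uplink, dnsmasq is used as a simple DHCP server, and 192.168.42.0/24 is the AP subnet.

#!/bin/bash
# Minimal AP setup sketch (run as root); adjust interfaces, SSID and addresses to your setup.

cat > /etc/hostapd/hostapd.conf <<'EOF'
interface=wlan1
driver=nl80211
ssid=bad-network-ap
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme123
rsn_pairwise=CCMP
EOF

cat > /etc/dnsmasq.d/ap.conf <<'EOF'
interface=wlan1
dhcp-range=192.168.42.10,192.168.42.100,12h
EOF

# Give the AP interface a static address and NAT its subnet out through the uplink.
ip addr add 192.168.42.1/24 dev wlan1
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i wlan1 -o eth0 -j ACCEPT

service dnsmasq restart
hostapd /etc/hostapd/hostapd.conf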

Script for changing the Quality of Service (QoS) characteristics of your AP.

Now you should be able to connect to the created AP with your devices. It should provide you with a similar Internet connection quality as you have on your workstation. To simulate various cellular data networks, we need to limit it somehow.

The following script does it by setting various traffic control (tc) rules. You will need to alter it a bit before using it (the sketch after this list shows one way to look these values up):
  1. Set IF_IN to the name of the network interface dedicated to the created AP.
  2. Set IF_OUT to the name of the network interface by which your workstation is connected to the Internet.
  3. Set IP_IN to the IP address space which will be assigned to your connected devices (you chose this when setting up the DHCP server).
  4. Set IP_OUT to the IP address of your application backend server.
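A quick, hedged way to look these values up from a shell (interface names will differ on your machine):

# List all network interfaces; the USB dongle typically shows up as wlan0/wlan1 (IF_IN).
ip link show

# Show IPv4 addresses, to see which subnet your DHCP server hands out on the AP interface (IP_IN).
ip -4 addr show

# The default route reveals the interface your workstation uses to reach the Internet (IF_OUT).
ip route show default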
Save the following script under a name of your choosing; the script in the next section will call it.
#!/bin/bash
#  tc uses the following units when passed as a parameter:
#  kbps: Kilobytes per second
#  mbps: Megabytes per second
#  kbit: Kilobits per second
#  mbit: Megabits per second
#  bps: Bytes per second
#       Amounts of data can be specified in:
#       kb or k: Kilobytes
#       mb or m: Megabytes
#       mbit: Megabits
#       kbit: Kilobits
#  To get the byte figure from bits, divide the number by 8.

# Name of the traffic control command.
TC=tc

# The network interfaces whose bandwidth we are limiting.
IF_IN=      # interface dedicated to the created AP
IF_OUT=     # interface by which the workstation is connected to the Internet

# IP addresses of the machines we are controlling.
IP_IN=      # address space assigned to the connected devices
IP_OUT=     # the address of your backend server

# Filter options for limiting the intended interface.
U32_IN="$TC filter add dev $IF_IN protocol ip parent 1: prio 1 u32"
U32_OUT="$TC filter add dev $IF_OUT protocol ip parent 2: prio 1 u32"

start() {
    ping -c 1 $IP_OUT >/dev/null 2>&1
    if [ $? -ne 0 ]; then
        echo "Error:"
        echo "The IP address: $IP_OUT is not reachable!"
        echo "Check out the backend server address!"
        exit 1
    fi

    $TC qdisc add dev $IF_IN root handle 1: htb default 30
    # download bandwidth
    $TC class add dev $IF_IN parent 1: classid 1:1 htb rate "$1"
    $U32_IN match ip dst $IP_IN/24 flowid 1:1
    # in delay
    $TC qdisc add dev $IF_IN parent 1:1 handle 10: netem delay "$3" "$4" distribution normal
    # in packet loss
    $TC qdisc add dev $IF_IN parent 10: netem loss "$7"

    # upload bandwidth
    $TC qdisc add dev $IF_OUT root handle 2: htb default 20
    $TC class add dev $IF_OUT parent 2: classid 2:1 htb rate "$2"
    $U32_OUT match ip dst $IP_OUT/32 flowid 2:1
    # out delay
    $TC qdisc add dev $IF_OUT parent 2:1 handle 20: netem delay "$5" "$6" distribution normal
    $U32_OUT match ip dst $IP_OUT/32 flowid 20:
}

stop() {
    # Stop the bandwidth shaping.
    $TC qdisc del dev $IF_IN root
    $TC qdisc del dev $IF_OUT root
}

show() {
    # Display the traffic control status.
    echo "Interface for download:"
    $TC -s qdisc ls dev $IF_IN
    echo "Interface for upload:"
    $TC -s qdisc ls dev $IF_OUT
}

case "$1" in

    if [ "$#" -ne 8 ]; then
        echo "ERROR: Illegal number of parameters"
        echo "Usage: ./ start [downloadLimit] [uploadLimit] [inDelayMax] [inDelayMin] [outDelayMax] [outDelayMin] [packetLossPercentage] "
        echo "[downloadLimit]  See man page of tc command to see supported formats, e.g. 1mbit."
 echo "[uploadLimit] The same as for downloadLimit applies here."
 echo "[inDelayMax] Max delay in miliseconds for requests outgoing from AP."
 echo "[inDelayMin] Min in delay."
 echo "[outDelayMax] Max Delay in miliseconds for requests outgoing to servers."
 echo "[outDelayMin] Min out delay."
 echo "[packetLossPercentage] The percentage of packet lost"
 echo "Example: / 1mbit 1mbit 50ms 20ms 30ms 10ms 5%"
        exit -1

    echo "Starting  shaping quality of service: "
    start $2 $3 $4 $5 $6 $7 $8
    echo "done"


    echo "Stopping shaping quality of service: "
    echo "done"


    echo "Shaping quality of service status for $IF_IN and $IF_OUT:"
    echo ""


    echo "Usage: {start|stop|show}"


exit 0

Script for setting a particular QoS, simulating GPRS, EDGE, 3G, LTE and similar networks.

Now that you have a script to limit the QoS characteristics of your created AP, you will need to do some measurements in order to get an idea what bandwidth, latency and packet loss the various cellular data networks have. You will need to find a way to measure these characteristics in the environment where your customers use your application.

The reason is that the same data network type (e.g. 3G) can have different QoS characteristics in different places. There are other factors in play as well: the mobile Internet provider, the hour of the day, city vs. village, the weather and the like. For the measurements I used handy mobile applications (for bandwidth and latency) and Fing for a double check, as it is able to ping any server you like.
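If you prefer measuring from a laptop tethered to the phone's connection instead, a rough picture can also be obtained from the shell; backend.example.com and the file URL below are placeholders:

# Latency, jitter and packet loss towards your backend (20 probes).
ping -c 20 backend.example.com

# Very rough download bandwidth estimate: average bytes/second while fetching a large file.
curl -o /dev/null -s -w "%{speed_download} bytes/sec\n" https://backend.example.com/some-large-file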

Off-topic: would it not be awesome if there were a web service which would give me the average QoS characteristics of any place in the world for a particular hour of the day, for a particular data carrier, for particular weather and other conditions? I submitted a bachelor thesis assignment for it, but so far no enrollment :) And it would IMO be quite easy to set up: a mobile application with gamification characteristics to collect the statistics, store them, and make them available via some REST endpoints.

Save the following script into the same directory as the previous one. The measured values you can see are valid for an average morning in Brno, Czech Republic; they are averages from one week of measuring.


echo -n "Shaping WIFI to "

case "$1" in

 echo "GRPRS"
 $QOS stop > /dev/null 2>&1
 $QOS start 80kbit 20kbit 200ms 40ms 200ms 40ms 5%

 echo "EDGE"
 $QOS stop > /dev/null 2>&1
 $QOS start 200kbit 260kbit 120ms 40ms 120ms 40ms 5%

 echo "HDSPA"
 $QOS stop > /dev/null 2>&1
 $QOS start 2400kbit 2400kbit 100ms 100ms 100ms 100ms 5%

 echo "LTE"

 echo "FULL"
 $QOS stop > /dev/null 2>&1

 echo "DISABLED"
 $QOS stop > /dev/null 2>&1
 $QOS start 1kbit 1kbit 5000ms 5000ms 5000ms 5000ms 5%



exit 0

Example usage

So if you followed the steps, you should now be able to:
  1. Start the AP by: service wifi_access_point start.
  2. Simulate e.g. EDGE by running the network-type script from the previous section with EDGE as the argument.
Ideas have no limits. Use these scripts to, for example, network stress test your application (write a bash script which randomly switches among all network types at random intervals, like the sketch below), or use Wireshark to go deeper and see the actual packets being transmitted. Your development team will love you if you attach a saved capture with packet-level information to your bug report. Fixing tough, non-deterministic network issues becomes much easier.
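A minimal sketch of such a random-switching stress script, assuming the network-type script above is reachable via the NETWORK variable (the path is a placeholder you have to fill in):

#!/bin/bash
# Randomly switch among network profiles at random intervals to stress-test reconnect logic.
NETWORK=   # path to the network-type script from the previous section
TYPES=(GPRS EDGE HSDPA LTE FULL DISABLED)

while true; do
    TYPE=${TYPES[$RANDOM % ${#TYPES[@]}]}
    echo "Switching network profile to $TYPE"
    $NETWORK "$TYPE"
    sleep $(( (RANDOM % 120) + 30 ))   # keep this profile for 30-150 seconds
done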

Disclaimer: I am still improving the scripts, so use them at your own risk :) Any feedback on how you utilized these scripts, or your improvements, would be deeply appreciated.

Thursday, June 25, 2015

Very bad network simulation for testing of mobile applications [PART 1]


Mobile Internet is a must for smartphones. Most apps are somehow connected to a server, syncing every now and then, whether it is just to show an advertisement, to sync your local changes with your profile somewhere in the cloud, or maybe to protect the app from being distributed as a cracked copy without paying for it.

But there is also another category of mobile applications, which depend heavily on the Internet connection. One example is applications intended for communication. Let's consider for instance the PhoneX app.

All its features (secure calls, secure messaging and secure file transfer) require a decent Internet connection to work. And the main features are just the beginning: everything from authentication through contact presence to server push notifications establishes TCP or UDP connections with the servers.

Disadvantages of the traditional way

With such applications, QE teams have to devote non-trivial effort to testing application functionality under various network conditions. There are various ways to simulate real user conditions. Firstly, one can buy SIM cards for all of the devices, enable mobile data and spend a lot of time travelling around the city. This method makes the testing environment the most realistic one, but one has to consider its downsides as well:

  • Out of reach of your computer, it is more difficult to automate some of the app routines while you are moving, to offload the mundane repetition of interactions with the app. In your office, it is much easier to set up a script or to write a functional test which would send 200 subsequent messages or so.
  • Quality of service statistics vary significantly around the globe. And you do not have to go far: for example, 3G bandwidth, latency and jitter are quite different in two towns not far away from each other (100 km). Needless to say, some places can only dream about LTE, and these QoS characteristics also vary with the hour of the day (you would not like testing at 1 AM somewhere on public transport). Simulating all these different conditions in a laboratory is indeed more efficient.
  • It would be more difficult to intercept the communication, e.g. with Wireshark. That is sometimes handy when developers need to see the actual transmitted packets in order to fix an issue.
  • It is more reliable to save mobile system logs, such as logcat on Android, directly to the computer. I do not know why, but it is often the case for me that some of the logs are missing when saving them to a file on the device (maybe some buffer limitation, who knows). I find it more reliable to have the phones connected to the computer and save such logs right there.
  • Total loss of connection, or loss of some of the packets, is more easily scripted in your testing laboratory than in the real world.
  • Users also use various WIFI APs, whose restrictions (e.g. client isolation) can badly affect your application's features.
  • The most obvious reason is the time spent moving around out there, compared with the time spent in the comfort of your air-conditioned office furnished with the most ergonomic seats out there.
For sure there are other reasons why I consider simulating a poor Internet connection in the laboratory a better option than trying to reproduce the bugs outside. I am not saying that it can substitute all testing while moving with the device; I am just saying that it can replace most of the testing under various network conditions.

Next part

In the next part we will look into how to set up a WIFI Access Point, and at some scripts which enable simulating a poor Internet connection. The iOS platform already has a solution for this (Settings -> Developer -> Network Link Conditioner). Our solution will be platform independent and will address all of the disadvantages described above. Stay tuned.

Tuesday, June 2, 2015

Recording tests on Android (neither root, nor KitKat required)

A test suite is good only when it provides good feedback. Testing mobile apps is cumbersome and far from robust (actually, all UI tests are like that), so a meaningful test report is essential. That is why I really like to have the executions of my tests recorded. Such recordings are a great way to avoid repeated execution of a test just to find out why it failed (repeated execution of tests should be avoided like the plague).

It is awesome that Google added native support for recording the screen of your Android 4.4.x+ device, but what about the other folks with lower Android versions? We cannot afford to test only on 4.4+, as it is wise to support at least 4.0+. A rooted device is not the answer for me either, as we need to test on real devices, devices which are actually used by our customers.
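For completeness, on a 4.4+ device that built-in recorder is a single adb call (it records at most a few minutes per invocation):

adb shell screenrecord /sdcard/demo.mp4
adb pull /sdcard/demo.mp4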

OK, all Android versions are capable of taking a screen capture, so why not use this feature? The following describes small bash scripts which, in simple words, create a video (actually a .gif at 2 fps) from such screen captures. It is then easy to use them to record your functional UI tests (showcased below on Appium tests).

First, the script which takes screenshots into a specified directory on your device until it is terminated:

#!/bin/bash
DIR=   # directory on the device where the screenshots will be stored
adb -s $1 shell rm -r $DIR > /dev/null 2>&1
adb -s $1 shell mkdir $DIR > /dev/null 2>&1
for (( i=1; ; i++ ))
do
 name=`date +%s`
 adb -s $1 shell screencap -p "$DIR/$name.png"
done
You can try it by executing the script with [serialNumberOfDevice] as its only argument.

Second, the script which retrieves the taken screenshots from the device to your computer, resizes them to a smaller resolution, and finally creates an animated .gif:

mkdir "$1"
cd "$1"
adb -s $1 pull $DIR_REMOTE
echo "Resizing screenshots to smaller size!"
mogrify -resize 640x480 *.png
echo "Converting to .gif."
convert -delay 50 *.png "$1"-test.gif
echo "Clearing..."
cp "$1"-test.gif ..
cd ..
rm -rf "$1"
Try it by executing the script with [serialNumberOfDevice] and [pathToDirectoryIntoWhichSaveScreenshots] as arguments. Just note that it uses ImageMagick and its sub-packages.

Here is an example of a .gif created by the scripts above, while sending encrypted files through the PhoneX app for secure communication:
So we have some scripts to execute (indeed there are things to improve, parameter checking etc.). There are various ways to use them in your tests; it all depends on which testing framework you use and in what language your tests are written. We use Appium and its Java client. The following shows executing the first script at the beginning of each test class:
public class AbstractTest {
    private Process takeScreenshotsProcess = null;

    protected void setupDevice1() throws Exception {
        takeScreenshotsProcess = startTakingOfScreenshots(DEVICE_1_NAME);
        //for readability omitted Appium API calls to setup device for testing
    }

    protected Process startTakingOfScreenshots(String deviceName) throws Exception {
        String[] cmd = { "sh/", getDeviceSerialNumber(deviceName) };
        return Runtime.getRuntime().exec(cmd);
    }

    public void tearDown() {
        if (takeScreenshotsProcess != null) {
            // stop the screenshot-taking process started in setupDevice1()
            takeScreenshotsProcess.destroy();
        }
    }
}
Hopefully the code above is somewhat self-explanatory. It starts taking screenshots before the Appium API calls prepare a device for testing (install the APK, etc.). The same pattern can be used for any number of devices.

The next step is to use the second script at the end of your CI job (e.g. in Jenkins). I prefer fine-grained CI jobs which are quick to execute, to provide fast feedback. Therefore each job is one test (or a matrix of tests), and that is why taking screenshots is started in the @Before method and terminated in the @After method.
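In Jenkins this can be as simple as an "Execute shell" post-build step that runs the second script for each device and archives the resulting .gif; the script name, variables and serial numbers below are placeholders for whatever you used:

# Post-build shell step (names are placeholders): collect screenshots and build the .gifs.
./pull_screenshots.sh "$DEVICE_1_SERIAL" "$REMOTE_SCREENSHOT_DIR"
./pull_screenshots.sh "$DEVICE_2_SERIAL" "$REMOTE_SCREENSHOT_DIR"
# Archive the produced *-test.gif files as build artifacts so they sit next to the test report.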

Please bear in mind that the above are just examples. They need to be polished and altered to one's needs. Enjoy testing.

Friday, January 27, 2012

Migration to Arquillian - done

Or how the RichFaces functional test suites were migrated to the Arquillian framework.

Table of contents
  1. Migration motivation
  2. Arquillian Ajocado project set up
  3. Writing tests
  4. RichFaces Selenium vs. Ajocado API

Migration motivation
The initial reason for migrating was a problem with the Maven Cargo plugin and its support of JBoss AS 7. In a short time, we also realized how many additional advantages Arquillian would bring to our project. My task was to prove this concept by porting the functional tests of the RichFaces showcase app to the Arquillian framework.

Our former functional test suite was written as Selenium tests; more precisely, we used our homemade framework (RichFaces Selenium) on top of Selenium 1, from which Arquillian Ajocado was born. You can read more about RichFaces Selenium on its author's blog.

So the benefits of the new platform - Arquillian + Arquillian Ajocado - were pretty obvious:
  • support for various containers (JBoss AS 6.0, JBoss AS 7.0, Tomcat 7 and many others, see this for more)
  • some of them are managed by Arquillian, so starting, deploying etc. are done automatically, which makes them suitable for CI tools like Jenkins
  • the Drone extension brings the features of a type-safe Selenium 1.0 API by providing Ajocado, and also comes with Selenium 2.0 support and its WebDriver API
  • rapid test development with Ajocado
  • Ajocado's best feature is not only the type-safe API; it also fills Selenium's gaps with very useful tools for testing Ajax requests, via its waitAjax and guardXHR methods, which are so essential in AJAX frameworks like RichFaces
  • Arquillian's future support of mobile device testing, and WebDriver's current support of mobile device testing with its Android and iOS plugins
  • last but not least, Arquillian is an open-source project with quite a big community; it is evolving quickly, and as it is with open source, when you do not have a feature, you can either easily develop it with the support of the community (which I found out for myself when I was developing the Tomcat managed container for Arquillian) or you can file a feature request.

The only drawback we were aware of was the API incompatibilities between RichFaces Selenium and Ajocado. I will return to them at the end.

Arquillian Ajocado project set up
The best way to set up an Arquillian project is described in the documentation. As recommended, it is good to configure it as a Maven project; this recommendation was already fulfilled on the RichFaces side.
In short, two configuration files need to be written or altered. Here are examples of both from the migrated RichFaces projects: pom.xml and arquillian.xml.

As you can read in the docs, the only things you need to add to your pom.xml are the Arquillian dependencies and some profiles, which represent the desired containers onto which the tested application will be deployed. There is also an option to run tests inside these containers, but in our project it is enough to run them on the client.
An example of such dependencies is:

With this, you bring into your project all the required Ajocado dependencies, and also the WebDriver object. Of course you have other options, like using the JUnit test framework instead of TestNG. For the complete setup, again please see the corresponding docs.

The next XML snippet is the required Maven profile, which represents the container into which our application under test will be deployed. This is an example of the JBoss AS 7.1.0.CR1b Arquillian container.
Note that 7.1.0.Final is going to be released soon (7 February 2012), and then it will not take much time for the Arquillian managed dependency to be released as well. So, in order to use a newer version of the container, please check the available Maven dependencies (JBoss Nexus) or the JBoss AS download page and change the version accordingly.
When this profile is executed in the standard way, mvn -Pjbossas-managed-7-1, the JBoss AS distribution is downloaded from the Maven repo and unzipped into the target directory.
With the help of the Surefire plugin, we then set the system property arquillian.launch, which selects the right configuration from arquillian.xml. You can also achieve this with -Darquillian.launch=[correspondingArquillianXMLQualifier]. Lastly, we set the JBOSS_HOME environment variable to tell Arquillian where our managed container is installed. The same can be achieved by correctly setting the jbossHome property in arquillian.xml.
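Putting those pieces together, a typical invocation from the command line might look like this (the profile and qualifier names are taken from the snippets in this post; adjust them to your own setup):

mvn clean verify -Pjbossas-managed-7-1 -Darquillian.launch=jbossas-managed-7-1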

The last required config file is arquillian.xml, placed on the classpath, so the ideal place for it is src/test/resources. An example which sets up the configuration for the above-mentioned JBoss AS, for Ajocado, for the Selenium server and for WebDriver looks like this:

<arquillian xmlns="" xmlns:xsi="">

  <engine>
    <property name="maxTestClassesBeforeRestart">10</property>
  </engine>

  <container qualifier="jbossas-managed-7-1">
    <configuration>
      <property name="javaVmArguments">-Xms1024m -Xmx1024m -XX:MaxPermSize=512m</property>
      <property name="serverConfig">standalone-full.xml</property>
    </configuration>
    <protocol type="jmx-as7">
      <property name="executionType">REMOTE</property>
    </protocol>
  </container>

  <extension qualifier="selenium-server">
    <property name="browserSessionReuse">true</property>
    <property name="port">8444</property>
  </extension>

  <extension qualifier="ajocado">
    <property name="browser">*firefox</property>
    <property name="contextRoot">http://localhost:8080/</property>
    <property name="seleniumTimeoutAjax">7000</property>
    <property name="seleniumMaximize">true</property>
    <property name="seleniumPort">8444</property>
    <property name="seleniumHost">localhost</property>
  </extension>

  <extension qualifier="webdriver">
    <property name="implementationClass">org.openqa.selenium.firefox.FirefoxDriver</property>
  </extension>

</arquillian>

In this file we defined that the maximum number of test classes which will be executed is 10; then the container will be restarted. This is a workaround for the famous OutOfMemory: PermGen error thrown after multiple deployments to a container. The number 10 was chosen after running the suite multiple times and for now it seems to be the optimal number, but you should try your own test suite and choose your own number. See also MaxPermSize, set for all containers to quite a big chunk of memory; this is also due to the above-mentioned error.
The next configuration is for the JBoss AS managed container; JBOSS_HOME is omitted there since we are setting it in pom.xml. Then there are additional JVM arguments, mainly increasing the permanent generation size and the heap size. These settings seem to be the most effective: we manage to run 10 test classes and run them quickly.
The configuration for the Selenium server consists of the property browserSessionReuse, which determines whether the same browser session should be reused, in other words whether a new browser is started after each test. The true value accelerates the tests quite dramatically. The Selenium server needs to be running for Ajocado tests, but not for WebDriver. For further configuration options, please see the docs.

You also need to alter your Java code to run with Arquillian. If you follow the docs, you will be successful for sure; I am just providing our approach.
We have one base class which is common to the whole test suite. It contains the method for deploying the application under test. We are deploying the whole application for all test classes, which will probably be replaced by deploying only what is needed, with the help of the ShrinkWrap project. This improvement should speed up the testing.
Since we want to write both Ajocado and WebDriver tests, we are providing two classes which particular tests will extend. It is not possible at the moment to have the Ajocado and WebDriver objects simultaneously accessible from the same test class.

Writing tests
Writing tests with Ajocado and Arquillian is really simple and fast. That is because Ajocado targets rapid development with its OO API as much as possible. With these features and modern IDE code completion, it is more pleasure than struggle to write tests. This is an example of such a test; let's examine it further.

So, as I mentioned above, to run a test successfully, there should be:

Method for deploying application under test:

@Deployment(testable = false)
public static WebArchive createTestArchive() {

    WebArchive war = ShrinkWrap.createFromZipFile(WebArchive.class, new File("target/showcase.war"));

    return war;
}

We are deploying a war which was copied into the project build directory with the help of the Maven dependency plugin. This method is located in the parent of the test, as it is the same for all tests. The testable = false argument in the annotation stands for running the tests on the client rather than on the server side.

Method for loading correct page on the browser:

@BeforeMethod(groups = { "arquillian" })
public void loadPage() {

    String addition = getAdditionToContextRoot();
    this.contextRoot = getContextRoot();
    // open the page; the URL is built as an object rather than a plain String
    selenium.open(URLUtils.buildUrl(contextRoot, "/showcase/", addition));
}

We have set up our tests so that they load the correct page according to the test class name; the method getAdditionToContextRoot deals with that. The context root can be set in arquillian.xml, and finally you just load that page as you are used to with Selenium 1, but instead of a String you are again using a higher-level object. Just note that if you are using testng.xml for including some test groups, you need to add your before/after methods to the group arquillian, and also to include this group in the particular testng.xml.

protected JQueryLocator commandButton = jq("input[type=submit]");
protected JQueryLocator input = jq("input[type=text]");
protected JQueryLocator outHello = jq("#out");

@Test
public void testTypeSomeCharactersAndClickOnTheButton() {

  // type a string and click on the button, check the outHello
  String testString = "Test string";

  // write something to the input
  selenium.typeKeys(input, testString);

  // click the button and check whether the click fired an AJAX request
  guard(selenium, RequestType.XHR).click(commandButton);

  String expectedOutput = "Hello " + testString + " !";
  assertEquals(selenium.getText(outHello), expectedOutput, "The output should be: " + expectedOutput);
}
I think the test is pretty much self-explanatory. Here you can also see the differences between Selenium 1 and Ajocado. Focus mainly on the fact that Ajocado uses various objects instead of just Strings for everything. In this example it is JQueryLocator, which provides a convenient and fast way of locating page elements by jQuery selectors.

RichFaces Selenium vs. Ajocado API
The API differences were one of the last problems we had. As we were using RichFaces Selenium, which has a very similar API to Ajocado, the migration could be done automatically. However, at first I had to migrate the whole suite manually to see exactly what the differences were. With the list of API changes I was able to develop a small Java app to automate this migration in the future. It was created mainly for our purposes, as there were new tests to migrate each day, but you can adapt that app for your purposes too. It is nothing big; it could probably easily be done in bash, but I am quite weak in bash scripting, so I did it in Java.

For the complete Ajocado vs. RichFaces Selenium differences, please visit the mentioned app's sources, where you can find them all. Here I am providing just the most important ones:

RichFaces Selenium → Ajocado

ElementLocator.getAsString → ElementLocator.getRawLocator
AjaxSeleniumProxy.getInstance() → AjaxSeleniumContext.getProxy()
SystemProperties → SystemPropertiesConfiguration; its methods are no longer static, for example seleniumDebug is retrieved this way: AjocadoConfigurationContext.getProxy().isSeleniumDebug()
JQueryLocator.getNthOccurence → JQueryLocator.get
RetrieverFactory.RETRIEVE_TEXT → TextRetriever.getInstance()
getNthChildElement(i) → removed; can be replaced by JQueryLocator(SimplifiedFormat.format("{0}:nth-child({1})", something.getRawLocator(), index))
RequestTypeGuard → RequestGuardInterceptor
RequestTypeGuardFactory → RequestGuardFactory
RequestInterceptor → RequestGuard
CommandInterceptionException → CommandInterceptorException
keyPress(String) → keyPress(char)
keyPressNative(String) → keyPressNative(int), so it is now possible to use KeyEvent static fields directly
isNotDisplayed → elementNotVisible; what is important about the whole displayed vs. visible change is that the visible methods fail when the element is not present, while the displayed methods return true, so keep this in mind when using the visible methods and consider checking elementPresent first
selenium.isDisplayed → selenium.isVisible
selenium.getRequestInterceptor() → selenium.getRequestGuard()
clearRequestTypeDone() → clearRequestDone()
waitXhr, waitHttp → guard(selenium, RequestType.XHR), guard(selenium, RequestType.HTTP)
FrameLocator → now has two implementations, FrameIndexLocator and FrameDOMLocator
ElementLocator methods → almost all methods were removed and only a few remain, since ElementLocator now implements the Iterator interface and it is easy to replace them with it. An example of this is here (see the initializeStateDataFromRow method)