I bought a used LeCroy WaveRunner 104MXi oscilloscope recently, and have been working on upgrading it to more recent components (the scope itself was made in 2008). The PC-based architecture of the scope, using mostly standard components, makes a lot of the upgrades possible, and many of these upgrades should be applicable to the entire range of WaveRunner Xi/MXi and WaveSurfer Xs/MXs scopes as well. Lots of credit goes to the many great threads on the EEVblog forums and the LeCroy owners mailing list.
It's a good idea to back up some of the data on the scope before performing upgrades:
For the scope's original HDD, I used the free version of Macrium Reflect to back up the entire drive. The important folder to back up is the “Calibration” folder under the USERDATA (D:) drive. For a used scope, you can also read the SMART data from the HDD to see roughly how much the scope was used in its prior life.
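One way to read the SMART data is with smartmontools, assuming you attach the drive to a Linux machine (or boot a live Linux environment) and it enumerates as /dev/sda; the device name here is only an example:

# Dump SMART data from the scope's original HDD (device name is an example).
# Power_On_Hours and Power_Cycle_Count give a rough idea of prior usage.
sudo smartctl -a /dev/sda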
To back up the DS2433 EEPROM on the PCI board, I made a UART-based reader following Maxim tutorial 214 (Using a UART to Implement a 1-Wire Bus Master). My version uses an Adafruit USB to TTL serial cable with a 1kΩ resistor:
In my case, the yellow wire is DATA and the orange wire is GND. On the PCI board, there are existing vias near the EEPROM that I used to hold down the pins, without the need to solder or desolder anything. On the PC side, I used the Maxim OneWireViewer to read the data.
To disassemble the scope, you'd need both T10 and T8 Torx screwdrivers. The back housing comes off first, then the front panel, then the front-end/acquisition board, and finally the motherboard. Be aware that for the acquisition board, there is a screw underneath a heatsink that is quite hard to get to. Make sure you take lots of photos of cable connections and screw locations to make it easier to reassemble the scope.
I ordered a Pentium M 765 (2.1GHz) Dothan CPU that worked perfectly. This CPU is the fastest you can get for the default 400MHz FSB. There are reports of the MX855LC motherboard not supporting 90nm Dothan CPUs, but my scope came with BIOS version 1.0.3A from October 2007, which does support Dothan.
There is a jumper on the motherboard that can enable 533MHz FSB, in which case a Pentium M 780 (2.26GHz) CPU may be possible, but I haven't tried this option.
I ordered one stick of 1GB PC3200 CAS3 DDR DIMM, and it worked fine. 1GB is the highest capacity available for this type of DDR memory. Note that the motherboard only supports PC2700, so getting the fastest PC3200 RAM is not crucial. Also, for the SSD upgrades described below, I would use a RAM stick with an integrated heat spreader, because the SSD parts will touch the RAM stick, and the heat spreader acts as a physical barrier.
I ordered a 150GB SATA SSD and used a PATA-to-SATA bridge to connect it to the motherboard. Because the long PATA cable that came with the scope is a 44-conductor cable, it's only rated to UDMA-2, or ATA/33. In order to achieve faster transfer rates, I decided to put the PATA-to-SATA bridge at the motherboard end, and use SATA cables for the connection to the SSD. This required the following parts:
In order for the cable and bridge combination to fit inside the space between the power supply and the RAM stick, the SATA connector on the bridge board may need to be gently bent downwards:
Once connected to the motherboard, the SATA cable should now lie across the top, with the ATX power cable and RAM underneath. With this setup, I also saw some possible interference from the ATX power cable, which made the SATA bridge stop functioning intermittently; adding a shield made from aluminum foil (and lots of Kapton tape) seemed to fix the issue:
At this point, I would suggest putting everything back together and installing Windows 7 on the SSD. You might notice that the SSD is still running at the slower ATA/33 speed. How come? It turns out there is a pin on the PATA connector that tells the motherboard whether the PATA cable is a 40/44-conductor or an 80-conductor cable. For ATA/66 and ATA/100 support, we need to modify the PATA-to-SATA bridge to make the motherboard think we have an 80-conductor cable. The modification involves shorting pin 34 (DMA66_Detect) and pin 30 (GND) together on the PATA connector:
But why not do this modification before installing Windows? I've noticed that for some reason, Windows has trouble booting up when running at ATA/100, which is the default speed once we make the modification. To make Windows boot reliably, we need to first limit the speed to ATA/66, which can be done via a registry setting after installing Windows but before making the modification. The limit can be set with the following registry change:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\PCIIDE\IDEChannel\4&1a057fde&0&0\Device Parameters\Target0]
"UserTimingModeAllowed"=dword:0000ffff
In summary, the most reliable procedure for running the SSD at ATA/66 speed is:
1. Install Windows with the unmodified bridge, running at the default ATA/33 speed.
2. Apply the registry setting above to limit the maximum speed to ATA/66.
3. Shut down and short pins 34 and 30 on the bridge's PATA connector so the motherboard detects an 80-conductor cable.
After these steps, Windows should be running reliably at UDMA-4 or ATA/66. Although complicated, I think it's worth it for getting some extra speed out of the SSD. My finished setup looks like this:
I ordered an NEC NL10276BC20-18D LCD assembly in order to upgrade the LCD to 1024×768 resolution. I chose that particular model because it was the cheapest on eBay at the time, and its connector is compatible with the original LVDS connector in the scope. However, a better choice would be NL10276BC20-04, which has the same mounting bracket as the original LCD assembly in the scope, and would make upgrading a breeze.
In order to use the NL10276BC20-18D, I had to basically disassemble the LCD down to the bare panel itself, and combine it with almost everything else (mounting bracket, CCFL backlight, etc.) from the original assembly. It wasn't pretty, but it worked in the end.
The original LCD assembly used a converter board to convert LVDS to parallel signals at the panel. The new LCD panel has an LVDS connector already, so the converter board is no longer needed. However, the pins on the LVDS cable have to be rearranged somewhat to match the pinout of the new LCD panel:
As seen from the photos above, to make the new pinout, I moved:
The new LCD panel should work at this point. To fix the LCD image during boot-up, make sure to change the LCD type to 1024×768 in the BIOS settings.
The touch screen on my scope came with minor damage, so I replaced it with a generic 10.4” 4-wire touch screen (the one I ordered was specified as a replacement for “N010-0554-X225/01”). I had to remove the old touch screen from the attached metal bracket, so I could reuse the bracket for the new touch screen; to help with that, I used a hot air gun to soften the adhesive, so I could pry away the old touch screen.
Another issue here is that the new touch screen had a shorter flex cable compared to the old one. The old flex cable was longer than the width of the display and reached across the back of the LCD panel to connect to the front panel board. To make the shorter flex work, I flipped the orientation of the new touch screen so that its flex cable side is right next to the front panel board. However, this meant that the coordinates of the new touch screen are now flipped compared to the coordinates of the old touch screen. To correct for this, I flipped the coordinates in software in the touch screen driver I wrote (See the “Windows 7” section).
I didn't touch the PSU at all because it's working fine on my scope. However, if (when?) it does fail, I think it's possible to design a replacement using pinouts of the various power connectors that you can find online.
Windows 7 is the latest version of Windows that the motherboard supports, due to its Intel i82855GME chipset. When installing Windows, make sure you create two partitions: the “SYSTEM” partition (C:) where Windows is installed, and the “USERDATA” partition (D:). Windows 7 almost works out-of-the-box, and I only had to install a few extra drivers:
For Windows Update to work, the Windows Update Agent must first be updated to the latest version (see Microsoft KB Article 949104).
To install the LeCroy software, first download and run “xstreamdsodrivers.exe” from the LeCroy support site to install the driver for the acquisition board. Then download and run “xstreamdsoinstaller_8.6.2.10.exe”, which seems to be the latest available XStream software for x86. Before running the XStream software, make sure to restore the “Calibration” folder under the D: drive, and make sure the drive is renamed to “USERDATA”, in order for the software to find it.
The LeCroy software comes with a Windows service called “lectouchscreenctrl.exe” that lets you use the scope's touch screen inside Windows. Unfortunately, this service is an “interactive service” and is not supported by Windows 7. To make the touch screen work, I disabled the service, and wrote a custom driver that exposes the front panel USB HID device as a native Windows touch device. Right now the driver works but is a little finicky; I hope to share it once it's more stable.
It's been pretty time-consuming but also very rewarding to upgrade this scope over the last month or so. And over time, I got pretty good at taking it apart. If it's your first time disassembling the scope, make sure to take lots of pictures along the way, and also take pictures of screws so you know which screw goes where when you re-assemble it later. Obviously take ESD precautions, and wear gloves if possible to avoid leaving skin residues on the sensitive analog PCBs, which could affect their performance. Avoid taking apart the BNC front-end board and the acquisition board! You don't need to take those two apart for any of the upgrades, and it can be a pain to put them back together (I ended up shearing off one of the screws when putting them back. Lesson learned!).
Proxmox 4 doesn't seem to support pinning a VM's CPUs to specific host CPUs. It also doesn't support VM startup hooks, so there's no straightforward way to run taskset on the newly created VM. However, when the QEMU process is created, it writes its PID to the file /var/run/qemu-server/$id.pid, where $id is the VM ID. By watching writes to this file, e.g. through inotifywait, it's actually pretty easy to create a startup hook to run taskset or perform any other tasks.
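As a rough illustration of the idea (assuming a VM with ID 100 that should be pinned to host CPUs 0, 2, 4 and 6), the manual, one-off version of the hook would be something like:

# pin the already-running VM 100 to every other host CPU
taskset -cp 0,2,4,6 $(cat /var/run/qemu-server/100.pid)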
I wrote a systemd service that watches the /var/run/qemu-server directory, and automatically calls taskset on newly created VM processes based on configuration files.
[Unit]
Description = Auto taskset service

[Service]
Type = simple
ExecStart = /bin/bash -c " \
    conf=/etc/autotaskset; \
    dir=/var/run/qemu-server; \
    mkdir -p \"$$dir\"; \
    /usr/bin/inotifywait -mq -e modify --format %%f \"$$dir\" | \
    while read pid; do \
        [ -f \"$$conf\"/\"$$pid\" ] && \
        /usr/bin/taskset $$(< \"$$conf\"/\"$$pid\") $$(< \"$$dir\"/\"$$pid\"); \
    done"

[Install]
WantedBy = multi-user.target
To enable the service, install the inotify-tools package, then run systemctl enable autotaskset.
To configure CPU pinning for each VM, create a file /etc/autotaskset/$id.pid, where $id is the VM ID, containing all the arguments to taskset. For example, for a VM with ID 100, if I want to pin the VM's 4 CPUs to every other host CPU (for example, to skip the second hyper-threaded CPU on each physical core), I would use the following conf file.
-cp 0,2,4,6
The -c option enables specifying the host CPUs by their IDs, and the -p option is required because we're operating on an existing process given its PID.
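Putting it all together for the VM-100 example, the setup looks roughly like this (assuming the unit file above has been saved as /etc/systemd/system/autotaskset.service):

apt-get install inotify-tools                  # provides inotifywait
mkdir -p /etc/autotaskset
echo "-cp 0,2,4,6" > /etc/autotaskset/100.pid  # taskset arguments for VM 100
systemctl daemon-reload
systemctl enable autotaskset
systemctl start autotaskset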
I'm building a homelab server running Proxmox with a Windows 10 VM for daily usage, so I wanted to use PCI passthrough to let the VM access the GTX 1060 graphics card that's installed in the server. Here's what I had to do to get past the infamous “code 43” error from the Nvidia driver when you try to pass through consumer-grade cards to Windows VMs. Credit for the first three points goes to this post on the vfio-users mailing list.
The registry entries to change are HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1C03&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties\MSISupported for video, and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_10F1&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties\MSISupported for audio. Setting both to 1 enables MSI.
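For reference, a .reg file that sets both entries would look something like the sketch below; the device instance segment (the x&xxxxxxxx&x&xxxx part) and the VEN/DEV/SUBSYS IDs are specific to my card and machine, so copy the exact paths from your own registry rather than using these verbatim:

Windows Registry Editor Version 5.00

; placeholder instance paths -- replace with the exact paths from regedit
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1C03&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_10F1&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001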
With these steps, I was finally able to get a working passed-through graphics card in the Windows VM. Note that every time you update the graphics driver in the VM, you may have to repeat these steps to re-enable MSI, because new drivers create new registry keys that don't have the “MSISupported” entries.

Edit: Note that a full Windows system update may also automatically update the drivers and reset the MSI flags; if you get stuttering audio after an update, this may be the cause.
The Lenovo LI2264d (or LI2364d) monitor doesn't support VESA mounts directly, but here's one way to mount it on a VESA mount. All the parts can be bought for < $10 at stores like Home Depot, and the whole thing takes a couple of hours at most to put together.
First, take apart the vertical part of the stand that came with the monitor. There is a black plastic cover that can be taken off by removing the screws, revealing the assembly inside.
We want to reuse the bare metal part that fits into the monitor body. I used a 5/16" wrench to reach into the assembly and unscrew the hex nuts.
After disassembly, you should also end up with two screws and a bunch of washers/spacers. The washers are important because we'll later use them to match the width of our bracket to the width of the monitor mount.
To make the bracket that attaches the VESA mount to the monitor mount, I used two corner braces. The 3” ones from Home Depot worked for me because, 1) the pre-drilled holes are big enough for the screws that fit into the monitor mount, and 2) the distance between the edge and the inner holes, plus the width of the monitor mount, is close to the 75mm distance on the VESA mount.
First, I attached the braces to the VESA mount plate that came with the VIVO monitor stand that I'll be using. The assembly consists of M4 screws, hex nuts, and a couple of washers.
On the other end of the bracket, line up the bracket with the monitor mount from the monitor stand assembly. Use the original screws, hex nuts, washers, and spacers to secure the parts. It may take a little effort to fit them together. Here's the assembled bracket with VESA mount plate on one end and the monitor mount on the other end. This particular assembly actually didn't work because it was too wide for the recess in the monitor body that you fit the monitor mount into. I had to take this apart and move the outside spacers to the inside for it to fit.
At this point I thought I was done, but of course when I tried to fit the monitor mount onto the monitor body, it wouldn't fit, because part of the bracket was pushing into the top wall of the recess on the monitor body. Ugh! I had to cut two grooves into that wall to let the bracket through.
And with that… the mount finally fits!
It's working pretty well with the monitor stand. The monitor doesn't weigh that much, so I don't expect any problems for regular landscape usage. Actually, the mount feels sturdy enough sideways that you might even get away with using it in portrait.
Edit: I've since switched the corner braces to smaller braces from Menards. These braces also fit the 75mm VESA mounts, but require some additional spacers to give enough clearance to the back connectors. One benefit of the smaller braces is they give a greater degree of tilt adjustment.
The Fennec LogView add-on has been updated to version 1.2, and it now supports Android Lollipop, Marshmallow, and above.
The previous version read directly from the Android logger device located at /dev/log/main, which worked well on Android 4.x. However, starting with Android 5.0, apps no longer have read permission to /dev/log/main. Fortunately, Android 5.0 also added several new APIs in the liblog.so library specifically for reading logs. LogView 1.2 uses these new APIs, when available, through js-ctypes, and this approach should continue to work on future versions of Android as well.
There has been a series of recent changes to the Fennec platform code (under widget/android). Most of the changes were refactoring in preparation for supporting multiple GeckoViews.
Currently, only one GeckoView is supported at a time in an Android app. This is the case for Fennec, where all tabs are shown within one GeckoView in the main activity. However, we'd like to eventually support having multiple GeckoViews at the same time, which would not only make GeckoView more usable and make more features possible, but also reduce a lot of technical debt that we have accumulated over the years.
The simplest way to support multiple GeckoViews is to open multiple nsWindows on the platform side, and associate each GeckoView with a new nsWindow. Right now, we open a new nsWindow in our command line handler (CLH) during startup, and never worry about having to open another window again. In fact, we quit Fennec by closing our only window. This assumption of having only one window will change for multiple GeckoView support.
Next, we needed a way of associating a Java GeckoView with a C++ nsWindow. For example, if a GeckoView sends a request to perform an operation, Gecko would need to know which nsWindow corresponds to that GeckoView. However, Java and platform would need to coordinate GeckoView and nsWindow creation somehow so that a match can be made.
Lastly, existing messaging systems would need to change. Over the years, GeckoAppShell has been the go-to place for platform-to-Java calls, and GeckoEvent has been the go-to for Java-to-platform calls. Over time, the two classes became a big mess of unrelated code stuffed together. Having multiple GeckoViews would make it even harder to maintain these two classes.
But there's hope! The recent refactoring introduced a new mechanism of implementing Java native methods using C++ class members 1). Using the new mechanism, calls on a Java object instance are automatically forwarded to calls on a C++ object instance, and everything in-between is auto-generated. This new mechanism provides a powerful tool to solve the problems mentioned above. Association between GeckoView and nsWindow is now a built-in part of the auto-generated code – a native call on a GeckoView instance can now be transparently forwarded to a call on an nsWindow instance, without writing extra code. In addition, events in GeckoEvent can now be implemented as native methods. For example, preference events can become native methods inside PrefHelper, and the goal is to eventually eliminate GeckoEvent altogether 2).
Effort is underway to move away from using the CLH to open nsWindows, which doesn't give an easy way to establish an association between a GeckoView and an nsWindow 3). Instead, nsWindow creation would move into a native method inside GeckoView that is called during GeckoView creation. As part of moving away from using the CLH, making a speculative connection was moved out of the CLH into its own native method inside GeckoThread 4). That also had the benefit of letting us make the speculative connection much earlier in the startup process.
This post provides some background on the on-going work in Fennec platform code. I plan to write another follow-up post that will include more of the technical details behind the new mechanism to implement native calls.
The LogView add-on for Fennec now lets you copy the logcat to the clipboard or post the logcat to pastebin.mozilla.org. Simply go to the about:logs page from Menu → Tools → Logs and tap on “Copy” or “Pastebin”. This feature is very useful if you encounter a bug and need the logs, but you are not next to a computer or don't have the Android SDK installed.
Back in January, I left on a two-month-long leave from Mozilla, in order to do some traveling in China and Japan. Now I'm finally back! I was in China for 1.5 months and in Japan for 2 weeks, and it was amazing! I made a short video highlighting parts of my trip:
Being a mobile developer, I naturally paid some attention to mobile phone usage in China, and how it's different from what I'm used to in the U.S. The cellular infrastructure was impressive. It was fairly cheap, and I was getting full 3G/4G service in small villages and along high-speed rail routes. It seemed like everyone had a smartphone, too. I would see grandmas standing on the side of the road checking their phones.
I never use QR codes in the U.S., but I actually used them quite often in China. For example, you would scan another person's QR code to add them as friends on Wechat. In some places, you could scan a merchant's QR code to pay that merchant using Alipay, a wallet app. Many types of tickets like train tickets and movie tickets also use QR codes over there.
Everyone used Wechat, a messaging app that's “way better than anything else in the U.S.” according to my American friend living in China. It's more than just a messaging app though – you have a “friend circle” that you can post to, a la Facebook; you can also follow “public accounts”, a la Twitter. The app has integrated wallet functionality: I paid for a train ticket and topped up my phone using the app; during Chinese New Year, people were sending each other cash gifts through it.
For some reason, you see a lot of these “all-in-one” apps in China. I used Baidu Maps during my travels, which does maps and navigation. However, you can also call taxis from within the app or hire a “private car”, a la Uber. You can use the app like Yelp to find nearby restaurants by type and reviews. While you're at it, the app lets you find “group buy” discounts at these restaurants, a la Groupon. I have to say it was super convenient. After I came back to the States, I wasn't used to using Google Maps anymore because it didn't do as much.
Of course, on the flip side, these apps probably would be less popular without the Internet censorship that's so prevalent over there. By creating a barrier for foreign companies to enter the Chinese market, the censorship provided opportunities for domestic companies to create and adapt copycat products. I found it amusing that Android is so prevalent in the Chinese smartphone market, but everything Google is blocked. As a result, you have all these third-party markets that may or may not be legitimate. Mobile malware seems to be a much larger issue in China than in the U.S., because people have to find their apps off of random markets/websites. It was strange to see an app market promising “safe, no malware” with every download link. Also amusingly, every larger app I saw came with its own updater, again because these apps could not count on having a market to provide update service.
Overall, the trip was quite eye-opening, to see China's tremendous development from multiple angles. I loved Japan, too; I felt it was a lot different from both China and the U.S. Maybe I'll write about Japan in another post.
Historically, JNI code in Fennec has mostly used raw JNI types like jobject and jstring. However, the need to manage object lifetimes and the lack of strong typing make this practice error-prone. We do have some helper classes like AutoLocalJNIFrame, RefCountedJavaObject, WrappedJavaObject, and AutoGlobalWrappedJavaObject, but I've found them to be inconvenient to use.
Bug 1116868 is introducing several new “smart” classes to improve dealing with JNI references. As a start, instead of using raw jobject types, there are now different smart types for different usages:
| Type | When to use |
|---|---|
| Object::LocalRef | To replace local jobject references; e.g. local variables |
| Object::GlobalRef | To replace global jobject references; e.g. instance members |
| Object::Param | To replace jobject function parameters |
Note that these new classes are under the mozilla::jni namespace, so your code should include it first,

using namespace mozilla::jni; // then use Object::LocalRef
namespace jni = mozilla::jni; // then use jni::Object::LocalRef
These classes make managing lifetimes very easy. Previously, it was easy to make mistakes like this,
jobject obj = GetObject();
// use obj
obj = GetAnotherObject(); // oops! first obj was leaked
This mistake could happen to both local and global references. Now with the smart classes, these errors are eliminated,
Object::LocalRef obj = GetObject();
// use obj
obj = GetAnotherObject(); // first obj was automatically deleted
The new classes also make it easy to convert between local and global references,
// Object::GlobalRef mObj;
Object::LocalRef obj = mObj; // automatic conversion from global to local ref
obj = GetNewObject();
mObj = obj; // automatic conversion from local to global ref
For function parameters, you can pass either a LocalRef or a GlobalRef to a Param parameter,
void SetObject(Object::Param obj);

Object::LocalRef obj;
SetObject(obj); // pass in LocalRef

// Object::GlobalRef mObj;
SetObject(mObj); // pass in GlobalRef
For function return values, you should return a LocalRef per JNI convention,
Object::LocalRef GetObject() {
    Object::LocalRef ret = MakeObject();
    return ret;
}
Note that in the above example, only one local reference is ever created because of return value optimization performed by the compiler.
Both LocalRef and GlobalRef support move semantics, so the following example still creates only one local reference overall,
Object::LocalRef GetObject();

Object::LocalRef foo;
foo = GetObject();
It also means you can use LocalRef and GlobalRef with container classes like mozilla::Vector without worrying about performance impact.
LocalRef and GlobalRef can be used like pointers/Java references,
Object::GlobalRef ref = nullptr;
ref = GetRef();
if (ref) {
    Object::LocalRef ref2 = ref;
    // compare underlying objects, so a LocalRef
    // and a GlobalRef of the same object are equal
    MOZ_ASSERT(ref == ref2);
}
jobject is not the only wrapped type; other JNI types correspond to different smart types,
| JNI type | Use these smart types | | |
|---|---|---|---|
| jstring | String::LocalRef | String::GlobalRef | String::Param |
| jclass | ClassObject::LocalRef | ClassObject::GlobalRef | ClassObject::Param |
| jthrowable | Throwable::LocalRef | Throwable::GlobalRef | Throwable::Param |
| jbooleanArray | BooleanArray::LocalRef | BooleanArray::GlobalRef | BooleanArray::Param |
| jbyteArray | ByteArray::LocalRef | ByteArray::GlobalRef | ByteArray::Param |
| … | … | … | … |
| jobjectArray | ObjectArray::LocalRef | ObjectArray::GlobalRef | ObjectArray::Param |
And if you use the auto-generated classes from widget/android/GeneratedJNIWrappers.h, each class is being updated in bug 1116589 to have its own reference types. For example, mozilla::widget::ViewTransform::LocalRef is a local reference to a Java ViewTransform instance,
ViewTransform::LocalRef vt = ViewTransform::New();
float x = vt->OffsetX(); // get offsetX field
vt->OffsetX(1.0f); // set offsetX field
Using separate classes ensures better type-safety, because type-checking is done by the compiler,
Foo::LocalRef foo;
Bar::LocalRef bar = foo; // error: invalid conversion
However, the auto-generated classes often only accept Object::Param parameters or return Object::LocalRef values. Therefore, as a special case, any LocalRef or GlobalRef can automatically convert to Object::Param, and Object::LocalRef can automatically convert to any other LocalRef.
Object::LocalRef Foo(Object::Param foo);

Bar::LocalRef bar;
bar = Foo(bar); // bar is converted to Object::Param,
                // then the return value is converted to Bar::LocalRef
The String types have some custom behavior in addition to the standard behavior. A String::LocalRef or String::GlobalRef can automatically convert to a nsString or a nsCString,
String::LocalRef GetString();

nsString str = nsString(GetString());
nsCString cstr = nsCString(GetString());
Conversely, a nsAString or a nsACString can automatically convert to a String::Param,
void SetString(String::Param param);

nsString str;
SetString(str);
SetString(NS_LITERAL_CSTRING("text"));

String::LocalRef ref;
SetString(ref); // okay too
A String::LocalRef, String::GlobalRef, or String::Param also has a Length method,
size_t GetStringLength(String::Param param) {
    return param.Length();
}
MOZ_ASSERT(GetStringLength(NS_LITERAL_STRING("text")) == 4);
The goal of these new classes is to make using raw JNI types obsolete. However, until all the refactoring is done, there are still cases where raw JNI values are needed, for example to call JNIEnv functions.
To get a raw JNI reference from any LocalRef or GlobalRef, call its Get method,
void Foo(jobject param);

Object::LocalRef obj;
Foo(obj.Get());
To turn a raw JNI reference into any Param, call the Foo::Ref::From method,
void Foo(Object::Param param);

jobject obj;
Foo(Object::Ref::From(obj));
To return a raw JNI reference from any LocalRef or GlobalRef, call its Forget method,
jobject GetRawRef() {
    Object::LocalRef ref = GetRef();
    return ref.Forget();
}
Use LocalRef::Adopt to manage a returned raw local reference,
jobject GetRawRef();

Object::LocalRef ref = Object::LocalRef::Adopt(GetRawRef());
LocalRef also has an Env method that returns a cached JNIEnv pointer,
Object::LocalRef ref = GetRef();
auto cls = ClassObject::LocalRef::Adopt(
    ref.Env()->GetObjectClass(ref.Get()));
We use the adb logcat functionality a lot when developing or debugging Fennec. For example, outside of remote debugging, the quickest way to see JavaScript warnings and errors is to check the logcat, which the JS console redirects to. Sometimes, we catch a Java exception (e.g. JSONException) and log it, but we otherwise ignore the exception. Unless you are actively looking at the logcat, it's easy to miss messages like these. In other cases, we simply want a way to check the logcat when away from a computer, or when a user is not familiar with adb or remote debugging.
The LogView add-on, available now on AMO, solves some of these problems. It continuously records the logcat output and monitors it. When it sees an error in the logcat, the error is displayed as a toast for visibility.
You can also access the current logs through the new about:logs page.
The add-on only supports Jelly Bean (4.1) and above, and only Fennec logs are included rather than logs for all apps. Check out the source code or contribute on Github.
Feature suggestions are also welcome! I think the next version will have the ability to filter logs in about:logs. It will also allow you to copy logs to the clipboard and/or post logs as a pastebin link.