Rick Strahl's FoxPro and Web Connection Web Log
 

Clicks not working in Internet Explorer Automation from FoxPro


Sunday, July 19, 2015, 10:00:45 PM

A few days ago, somebody posted a question on our message board mentioning that Internet Explorer Automation (using COM and InternetExplorer.Application) fails to trigger click events in recent versions of Internet Explorer. A quick check with my own code confirmed that clicks are indeed not properly triggering when running code like the following:

o = CREATEOBJECT('InternetExplorer.Application')
o.visible = .t.
o.Navigate('http://west-wind.com/wconnect/webcontrols/ControlBasics.wcsx')
DO WHILE o.ReadyState != 4
   WAIT WINDOW "" TIMEOUT .1
ENDDO
 
loWindow = o.document.ParentWindow
? loWindow
*loWindow.execScript([alert('hello')])
 
oLinks = o.Document.getElementsByTagName('a')
oLink = oLinks.item(0)
? oLink.href
oLink.click()   && doesn't work
 
o.document.getElementById('txtName').value = 'Rick'
oButton = o.document.getElementById('btnSubmit')
? oButton
oButton.Click()   && doesn't work

Note the link and button clicks – when this code is run with Internet Explorer 10 or later, the page navigates but the clicks are never registered in the control. This used to work just fine in IE 9 and older, but something has clearly changed.

IE 10 – DOM Compliance comes with Changes

Internet Explorer 10 was the first version of IE that supports the standard W3C DOM model, which is different from IE's older custom DOM implementation. If you're working with IE COM Automation you will find there are a number of small changes that can cause major issues in applications. In Html Help Builder, which extensively uses IE automation to provide HTML and Markdown editors, I ran into major issues at the time when IE was updated. There were both actual DOM changes to deal with for the W3C compliance, as well as some behavior changes in the COM interface used to access the DOM from external applications.

The issue in this case is the latter. The problem is that IE now exposes DOM elements natively, which means the native JavaScript objects are exposed directly as COM objects. Specifically, JavaScript functions always have at least one parameter – the arguments array – and that's reflected in the dynamic COM interface.

JavaScript Method Calls Require a Parameter

The workaround for this is very simple – instead of calling

.Click()

you can call

.Click(.F.)

Passing the single parameter matches the COM signature and that makes it all work. Thanks to Tore Bleken who reminded me of this issue, which I've run into myself countless times before in a few other scenarios.

So the updated code is:

o = CREATEOBJECT('InternetExplorer.Application')
o.visible = .t.
o.Navigate('http://west-wind.com/wconnect/webcontrols/ControlBasics.wcsx')
 
DO WHILE o.ReadyState != 4
   WAIT WINDOW "" TIMEOUT .1
ENDDO
  
* Target object has no id so navigate DOM to get object reference
oLinks = o.Document.getElementsByTagName('a')
oLink = oLinks.item(0)
* oLine.Click(.F.) 
 
 
o.document.getElementById('txtName').value = 'Rick'
oButton = o.document.getElementById('btnSubmit')
? oButton
oButton.Click(.F.)

The hardest part about this is to remember that sometimes this is required and other times it is not – it depends on the particular implementation of the element you're dealing with. In general, if you are dealing with an actual element of the DOM this rule applies. I've also run into this with global functions called from FoxPro.

The rule is this: Whenever you call into the actual HTML DOM's native interface, you need to do this. For example, if you define public functions and call them from FoxPro (o.document.parentWindow.myfunction(.F.)) you also need to ensure at least one parameter is passed. As a side note, function names have to be all lower case in order for FoxPro to be able to call them, because FoxPro forces COM calls to lower case and function names are case sensitive in JavaScript.
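To illustrate, here's a minimal sketch – the page-side myfunction is hypothetical – of calling a global JavaScript function defined on the page from FoxPro:

*** Assumes the page defines:  function myfunction(force) { ... }
loWindow = o.document.parentWindow
loWindow.myfunction(.F.)   && lower case name, at least one parameter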

These are silly issues that would probably be fairly easy to fix if FoxPro were still supported. Alas, since it's done, we'll have to live with these oddball COM behaviors. Luckily there are reasonably easy solutions to work around some of the issues, like the simple parameter trick above.


Drive Mapping in Web Applications


Sunday, June 28, 2015, 6:31:30 PM

Over the last couple of weeks a number of questions came up in regards to getting access to network drives from within your Web Connection applications. Drive mapping errors can be really insidious because you often don't know what's actually causing the problem as the problems usually occur right during application startup where it's difficult to debug. Additionally it's not obvious that the error is drive mapping vs. some other issue like permissions or some other failure in the application.

This may seem trivial, but the environment you develop in and the environment you deploy to often provide very different runtime contexts, which can cause problems when it comes to mapping drives. The reason is that Web applications usually run under a system context, rather than a user context.

IIS User Context

In IIS, a Web Connection COM Server or an IIS-launched standalone EXE file typically runs inside of the security context of an Application Pool. The security context is determined by the Application Pool Identity, which is set in the Application Pool's Advanced Settings in the IIS Management Console:

[Screenshot: Application Pool Identity in the Application Pool's Advanced Settings]

We recommend that you set the Application Pool identity, and if you're running the server as a COM Server, leave the DCOM settings at the Launching User, which inherits these settings into the COM server. This way configuration is kept in a single place, as the security flows from IIS into the COM object.

Regardless of whether you use a system account like SYSTEM or Network Service, or a full local or domain account, the accounts loaded by default do not have a user profile (you can enable that, but I don't recommend it), which means standard mechanisms of loading common startup and system settings are not applied. Among other things this means that drive mappings are never persisted across multiple logins, as that's part of a user profile.

So while you may have mapped a network drive with your user account, that network drive – even if persistently mapped – will not be visible to a different account, or even to the same account when loaded under IIS. This means you need to make sure that either your application, or the Windows subsystem that the account runs under, loads up any drive mappings.

Mapping Drives Or UNC Paths?

There are a couple of ways you can access network drives: You can use mapped drives where a network share and path are mapped to a drive letter, or you can use the raw UNC paths to access the resources directly.

UNC Paths

UNC paths appear to be simpler at a glance because they are a direct connection to a remote resource. A UNC path looks like this:

\\Server\Share\Path\file.dbf

and allows you to directly reference a folder or file using a somewhat verbose syntax.

But there are a few problems with UNC paths. First and foremost permissions are often an issue – if you are not referencing the remote path with credentials that are also valid on the remote machine, a UNC path won't work, as you can't easily attach authentication to your file access request.
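One partial workaround – sketched here with placeholder server, share and credentials – is to pre-authenticate the connection with net use without mapping a drive letter; subsequent UNC references to that share then use the cached credentials:

net use \\server\share "password" /User:"username"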

The other issue is that performance often is not very good. There are mixed reports on this – some people have found that UNC paths work as fast as mapped drives, but in my experience over slower connections UNC paths tend to be drastically slower in connecting to remote resources. Actual transfer speeds tend to be fine, but connection speeds can often be slow, and it appears that name resolutions are not cached, resulting in slow connection delays.

Mapped Drives

Mapped drives let you map a drive letter to remote network resources. Typically you map a drive letter to a share on  a remote server. In Windows this is done using the NET USE command:

net use i:  \\server\cdrive "password" /User:"username" /persistent:yes

This maps the i: drive to a remote resource.

As mentioned, the tricky part with drive mappings is to know where they are visible. When net use is issued, it's valid only for that specific user's context. This is why a drive you map while logged in to your interactive desktop account is not going to be visible to the SYSTEM account, or even to your own account when running in a non-interactive session.

What this means is that if you want to map drives you have to map the drives from within the correct context. For a Web application running in IIS this means setting up the mapping either as part of the startup code of the application or as part of Windows startup scripts that are fired when a Windows session is started.

Mapping from within an Application

Personally I prefer to handle drive mapping as part of the application to keep the drive dependencies configured in the same place as many other configuration settings. The key is to do the configuration before you access any resources that require these drives.

Using Web Connection it's easy to do this in the OnInit() or OnLoad() of the server code:

PROTECTED FUNCTION OnInit
 
...
 
MapNetworkDrive("i:","\\wwserver\data","username","password")
*** Check whether it worked by looking at some resource
IF !DIRECTORY("i:\")
    this.Trace("i: drive couldn't be mapped")    
ENDIF
 
ENDFUNC

MapNetworkDrive() is a helper function from wwApi.prg that essentially shells out to Windows and calls the net use command. It runs in the context of your application, so assuming you have rights to map a drive you should be able to map the drive here. It's a good idea to check for some known resource afterwards to see if the drive mapping worked, since the function itself gives no feedback. If it fails, call the Trace() function to log the info to wwTraceLog.txt so you can see the failure if it occurs, and potentially stop loading the application by erroring out with an ERROR() call.
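If the drive is essential to the application, you can fail hard right in OnInit() – a minimal sketch of that variation:

*** Abort server startup if a required mapping failed
IF !DIRECTORY("i:\")
    this.Trace("i: drive couldn't be mapped - aborting startup")
    ERROR "Required drive mapping i: failed"
ENDIF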

The MapNetworkDrive() function itself is pretty simple:

************************************************************************
*  MapNetworkDrive
****************************************
***  Function: Maps a network drive
***    Assume:
***      Pass: lcDrive     - i:
***            lcSharePath - UNC path to map \\server\share
***            lcUsername  - user name (if empty uses current creds)
***            lcPassword  - password
***    Return: .T. if the drive exists after mapping
************************************************************************
FUNCTION MapNetworkDrive(lcDrive, lcSharePath, lcUsername, lcPassword)
 
IF RIGHT(lcDrive,1) != ":"
   lcDrive = lcDrive + ":"
ENDIF
   
lcRun = [net use ] + lcDrive + [ "] + lcSharePath + [" ]
 
IF !EMPTY(lcUsername)
  lcUserName = ["]  + lcPassword + [" /USER:"] + lcUsername + ["]
ELSE
  lcUserName = ""
ENDIF
 
lcUsername = lcUserName + " /persistent:yes"
 
lcRun = lcRun + lcUsername
RUN &lcRun 
 
*** Check to see if the folder exists now
RETURN DIRECTORY(lcDrive)
ENDFUNC

Using System Policy Startup Scripts

One problem with the application mapping is that the drive is mapped every time the application starts. While it doesn't hurt to remap drives, there is some overhead in this process as it's slow, and if you have multiple instances firing up at the same time there may be some interference causing the map to fail.

Another, potentially safer way is to use System Policy to create a startup script that runs a batch file that creates the mappings. These scripts are fired once per Windows session, so there's less overhead and no potential for multiple applications trying to map drives simultaneously.

To do this:

  • Open Edit Group Policy
  • Go to Computer Configuration/Windows Settings/Startup
  • Point at a Batch file that includes the net use commands to map your drives

[Screenshot: Group Policy startup script configuration]
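The batch file itself is just a list of net use commands – here's a minimal sketch with placeholder shares and credentials:

REM MapDrives.bat - run once per Windows session as a startup script
net use i: \\wwserver\data "password" /User:"username" /persistent:yes
net use j: \\wwserver\archive "password" /User:"username" /persistent:yes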

Summary

Drive mapping can be a major headache in Web applications if you are not careful and plan ahead for the custom execution environment in which system hosted applications run. Essentially make sure you explicitly map your drives either as part of the application's startup or as part of the console system startup whenever a Windows session is started. I hope this short post clarifies some of the issues you might have to deal with in the context of Web Connection applications.


Visual FoxPro and Multi-Threading


Thursday, June 18, 2015, 3:57:22 PM

A few days ago somebody asked a question on the Universal Thread on whether it's possible to run a Visual FoxPro COM component as a free threaded component. Questions like this come up frequently, because there is a general sense of confusion on how Visual FoxPro's multi-threading actually works, so let's break this down in very simple terms.

Visual FoxPro is not Multi-threaded

The first thing to understand is that Visual FoxPro is not actually multi-threaded in the sense that say a native C++ component is multi-threaded.

Visual FoxPro is based on a very old and mostly single threaded code base. Your FoxPro code that runs as part of your application always runs on a single thread. This is true whether you are running a standalone EXE, inside of the VFP IDE or inside of an MTDLL. Behind the scenes there are a few things that Visual FoxPro does that are truly multi-threaded, but even those operations converge back onto a single thread before they return to your user executed code in your application. For example, some queries and query optimizations that use Rushmore are executed on multiple threads, as are some ActiveX interactions that involve events. The FoxPro IDE also does a few things interactively in the background. You can also call from FoxPro into a native DLL or a .NET component and start new threads outside of the actual Visual FoxPro runtime environment. But when we're talking about the actual code executing in your mainline programs – it is single threaded. 1 instance of your executing FoxPro code == 1 thread essentially.

Keep this in mind – Visual FoxPro code is single threaded and it can't and won't natively branch off to new threads. Further, FoxPro code pretty much assumes it's the only thing running on the machine, so it's not a particularly good citizen when it comes to giving up processor cycles for other things running on the same machine. This means Visual FoxPro will often hog the CPU compared to other, more multithread-aware applications. Essentially VFP will only yield when the OS forces it to yield.

Visual FoxPro supports running inside of some multi-threaded applications by way of a mechanism called STA – Single Threaded Apartment threading. VFP9T is not thread safe internally. It is thread safe only when running in STA mode called from a multi-threaded COM client that supports STA threading.

So what is a Multi-Threaded VFP COM DLL?

If you create a VFP component with BUILD MTDLL you are creating an STA DLL. STA stands for Single Threaded Apartment, and refers to a special COM threading model that essentially isolates the DLL in its own separate operating container. STA components are meant for components that are otherwise not thread-safe. A thread safe component is a component where a single instance can be instantiated and called from multiple threads and run without corrupting memory. Essentially this means that a free threaded component needs to have no shared state, or if it does have shared state, that shared state has to be isolated and guarded behind sequential access (Critical Sections or Mutexes etc.) to essentially serialize the access to any shared state.
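For reference, here's a minimal sketch of what a COM-publishable FoxPro class looks like (the class, method and project names are made up for illustration):

*** MyServer.prg - built with: BUILD MTDLL myserver FROM myproject
DEFINE CLASS MyServer AS Session OLEPUBLIC
 
FUNCTION HelloWorld(lcName)
    RETURN "Hello, " + TRANSFORM(lcName)
ENDFUNC
 
ENDDEFINE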

Visual FoxPro DLLs – MTDLL or otherwise – do not qualify as thread safe. FoxPro has tons of shared state internally, and if that shared state were to be accessed simultaneously from multiple threads – BOOOM! So FoxPro is not a true multi-threaded component.

This answers the original question: Visual FoxPro cannot run as a Free Threaded component directly as a DLL.

However, there are several ways that you can run VFP COM components in free threaded mode but not directly as a DLL. I’ll talk about that a bit later on.

STA For Multi-Threading

Ok so Free Threading is out for a DLL. But you can build MTDLL components which are STA components that can be used in multi-threaded environments that support STA components. For example, you can use ASP or ASP.NET pages to call FoxPro COM components and these components will effectively be able to run multiple simultaneous requests side by side.

This works by way of COM’s Apartment Threading model which essentially creates isolated apartments and caches fully self contained instances of the COM DLL in a separate storage space – the apartment. What this means is that when you create a new VFP COM component COM creates a new COM apartment and loads your DLL and the VFP runtime DLLs into it. COM leaves that apartment up and running. When the next request comes in it activates the same apartment with the already running runtimes inside of it, reattaches the component and thread state and then executes the request in the same apartment. If multiple requests come in simultaneously while all other apartments are busy a new one is created, so effectively you end up with multiple copies of the Visual FoxPro runtime running simultaneously, side by side. As requests come in they are routed to each of these existing apartments and the COM scheduler decides how long the apartments persist.

All of this happens as part of the COM runtime, with some logic as part of the STA component in the VFP runtime that makes it possible to launch your VFP COM component and link in the VFP runtime which is doing the heavy lifting. Behind the scenes each apartment then has its own thread local storage address space, which can hold what otherwise would be global shared data. A FoxPro MTDLL essentially maps your VFP component to a separate VFP9T runtime that stores all of its shared state in thread local storage, rather than in global shared memory. This is why there are a few things that don't properly work in MTDLL components – the memory usage is different and in fact a bit less efficient, as all shared state and buffers are stored in TLS.

What all this means is that in STA mode when simultaneous requests come in and process at the same time, the VFP runtime effectively runs multiple instances of the runtime side by side and thus ensures that there’s no memory corruption between what would otherwise be shared data.

If you want to understand how this works, try setting up a COM component that runs a lengthy query. First create a plain DLL component (not MTDLL), then load it into an ASP or ASP.NET application and hit the page with multiple browsers (or a tool like West Wind WebSurge for load testing). After a few hits you'll likely see errors popping up in IIS indicating that your COM component crashed. This is due to the memory corruption that occurs when you are not using the STA optimized DLL compilation.

Then recompile in MTDLL mode and try the same exercise again – you'll find that now you don't get a crash, because the shared state is protected by multiple instances of VFP9T.DLL. If you want to look at this even deeper, fire up Process Explorer and open the host process (w3wp.exe for the IIS Application Pool), then drill into the properties and loaded DLL dependencies. You'll see multiple instances of VFP9T.dll and your application DLL loaded (assuming you've pushed the app hard enough to require multiple instances to run simultaneously).

There's no reason not to use MTDLL COM components using STA if you can. STA is effective in isolating instances and that's as efficient as you are going to get. For do-nothing requests it's possible to easily get around 2000+ req/sec on a mid-range quad core i7 system. However, things get A LOT slower as soon as you add data access and return object data to the COM client, so the raw throughput you can achieve with STA is far outweighed by the CPU overhead of doing anything actually useful inside of your FoxPro code. Running a query and returning a collection of about 80 records, for example, drops the results down to about 8 req/sec. Call overhead is a nearly useless statistic when it comes to VFP COM objects, so even if free threading were possible, and even if it were 10x faster than STA threading, it would have next to no effect on throughput for most requests that take more than a few milliseconds. STA is the most effective way to get safe and efficient performance.

Where STA and MTDLLs fall short is error recovery. If an STA component fails with a C5 error or anything that hangs a component, the component will remain loaded but won't respond to further requests. The COM STA scheduler still thinks that component is active and running, which results in potentially confusing intermittent errors where, say, every 3rd request results in an error. There's no good way to recover from that, short of shutting down the host process (the IIS application pool). There's no way to shut down COM DLLs for maintenance tasks either – short of shutting down the host process. So if you need to run tasks like reindexing or packing of your data, you need to really think ahead about how to do this, as you have to kill all instances and then ensure only a single instance (ie. one user, not several overeager ones) is performing the administration tasks in order not to trigger multiple instances that have files open.

Bottom Line: STA components provide a good simulation of multi-threading, and this is typically the best way to get multi-threaded FoxPro components into a multi-threaded host like IIS – assuming the host supports STA threading, and provided you don't have very frequent admin tasks that require exclusive access to the server.

STA Support – not always available

STA is a good option for multi-threaded FoxPro code, but it's not always available. In fact, STA support is going away more and more, because the era of legacy COM components like FoxPro and VB6 has pretty much ended for high level development, and COM is now relegated to system components which typically can support free threading. For example, the only ASP.NET technology that officially supports STA components is ASP.NET WebForms. ASP.NET MVC, ASMX Web Services, WCF, Web API and vNext all don't have native support for STA built in.

There are ways to hack around this and I have a blog post that covers how to get STA components into various technologies:

Creating STA COM compatible ASP.NET Applications

which lets you use FoxPro components in different .NET technologies.

Free Threading – You Can Do It, but…

Earlier I said that it’s not possible to use VFP DLL COM components directly to run in Free Threaded environments. However, there are a couple of options available if you step outside of the box and use some component technology:

  • Use a COM+ Component
  • Use an EXE Server

Both of these technologies essentially create fully self contained objects. COM+ provides a host wrapper that can run either in process or out of process, while an EXE server is on its own a fully self contained instance. Both technologies have one big drawback: They are slow to create and dispose of instances.

COM+

COM+ is a wrapper technology that you can access on Windows using the Component Services plugin (make sure you run the 32 bit version of it for Fox components). COM+ allows you to register a VFP MTDLL COM component which essentially creates a registry redirect to instantiate your COM component through the COM+ runtime. The COM+ runtime creates a host container that essentially provides the STA Apartment that VFP expects and every access of the COM component is then routed through this HOST container. The actual ClassID points at COM+ with special keys that actually point at the ClassID for your component. The container loads itself then loads and executes the actual COM component inside of it.

COM+ supports both In Process and Out of Process activation modes but even the in-process mode tends to be fairly slow adding a lot of overhead for each call made to the container.

COM+ is also a bit of a pain to work with for debugging and updating of components. In order to update a COM component you have to unload the COM+ container and if the interface of the COM object changes you have to reregister the component in the COM+ manager which is fairly painful during development.

EXE Server

You can also use an EXE server to run as a free threaded component. Because an EXE server is effectively an out of process component, there's no overlap in shared memory or buffers, so launching an EXE server is an option for executing in a free threaded environment. The limitation is that the calling server has to support IDispatch invocation of COM objects.
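For illustration, activation from a COM client is plain late binding – here's what it looks like from FoxPro itself (the ProgID is hypothetical):

o = CREATEOBJECT("MyExeServer.MyServer")   && launches the out of process EXE
? o.HelloWorld("Rick")
o = .NULL.    && releasing the last reference lets the EXE shut down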

Like COM+, loading up an EXE component is slow, because each time it's instantiated a new instance of the VFP runtime is required. While the runtime disk images are cached, it's still pretty slow to load up and shut down full processes. However, if performance is not critical and you have to support free-threaded environments, this is one of the easiest ways to make it work!

There are other alternatives for how to run EXE servers – like running a pool manager of instances so that instances are loaded and then cached rather than having to be restarted each time. I use this approach in West Wind Web Connection, and by removing the overhead of loading a VFP instance each time and keeping VFP's running state active, raw request throughput actually rivals that of a DLL server with this approach. However, this sort of thing only works if you can control actual activation as part of the application you are building (as I do in Web Connection).

Summary

Multi-threading in FoxPro is a tenuous thing. It's possible due to some very creative hackery that was done on the VFP runtime to effectively make a single threaded application run in a multi-threaded environment. STA threading is a reasonable solution to make FoxPro work in those environments that support it. Unfortunately STA support is implemented less and less by new technologies, so it's harder to find places where STA components will actually work anymore. When STA doesn't work there are alternatives: COM+ or, if performance doesn't matter much, EXE servers can be used as well, and both of these technologies work in fully free threaded environments at the cost of some performance.

Today if you’re starting with any sort of new project, the recommendation for multi-threaded applications is to look elsewhere than FoxPro. Most other environments have native support for multi-threading so it’s much easier to integrate into multi-threaded environments.


A Preview of Features for Web Connection 6.0


Sunday, June 7, 2015, 2:21:07 PM

I've set some time aside to start working on new functionality for Web Connection 6.0. I've been really on the fence on whether I wanted to go down this path, due to the ever declining FoxPro market. However, there is still a fairly large contingent of developers out there using Web Connection, and even a fair number of users coming to Web Connection as new users. The existing version works fine and has most of the features that most developers need, but the truth is that a lot of the samples, documentation and content have gotten really dated. We do build Web apps differently these days, and while Web Connection still works fine for all the development scenarios with the updates I've provided over the years, the demos and documentation really don't reflect that very well anymore.

So for Web Connection 6.0 I've been planning a few things to make using Web Connection a bit easier. I still work with a lot of customers building Web Connection applications, and there are a few things that I think will really improve the usability of the product. While I plan on adding some new features and revamping the default Wizards and configuration, I also want to ensure that there's minimal to no impact on existing applications. While I expect the new project experience and layout to be different, the core engine and functionality won't change much, so existing applications will continue to run just fine on the new version.

Planned Feature Enhancements

There are 4 key areas that I'm planning on addressing with Web Connection 6:

  • Simplified Project Configuration and Management (and documentation)
  • Improved Templating and Scripting
  • Improved support for Mobile Web development (REST Services and Mobile friendly default templates)
  • Overhaul the default templates for new projects and the samples

These are only 4 bullets, but they represent quite a bit of work, especially the latter two, which mostly involve redoing the existing examples as well as reworking the documentation to match. Some of the work for the third item has already been done in version 5.70 and later, but there are additional improvements to be done and integrated.

Let's look at each of these.

Improved Project Setup

One thing that I've heard over the years is that it's a pain to manage Web Connection projects. In the past I've built the library in such a way to make it as easy as possible to create a project and ensure that the code 'just runs' out of the box. The result was that Web Connection generates new projects into the install folder, along with any other projects. If you manage multiple projects this gets messy – it's hard to separate your actual project code from other projects and from the framework code.

So last week I built a new New Project Wizard that creates projects in a much more self contained fashion. When you create a new project with Web Connection now, you get a new project folder that contains both the code and Web folders underneath the new project root. The code folder contains only your own code, and Web Connection is referenced via SET PATH. The new Wizard generates the necessary pathing and a shortcut to make it easy to get set up properly so that your project just builds. This was tricky to get right, and in fact has been the reason I did not want to implement this in the past.

The New Project Wizard has been whittled down to 2 simple pages now:

[Screenshot: New Project Wizard – page 1]

[Screenshot: New Project Wizard – page 2]

Gone are most of the choices for file locations and IIS configurations beyond picking the site, so this is much less intimidating for new users.

This is possible because with the new project structure we can create new projects with a known location and all of the paths for the Web site and configuration file locations can be determined based on the project path.

The default configuration settings are now also using relative paths for most paths, so that configurations are much more portable. In many cases you can just push up your project folder to a server and other than setting up the IIS ApplicationPool and Virtual directory the application should just work.

Here's what the project layout looks like:

[Screenshot: project folder layout on disk]

There's a project root folder (DevDemo) with deploy and web folders below it. Deploy is the 'code' or 'binary' (or both) folder for your project – this is where your application starts from. The web folder holds all the Web content and is what's mapped in IIS to the Web site or virtual directory.

The deploy folder contains your server and process classes, the project file, the compiled EXE and the INI configuration file. The project wizard also generates a config.fpw file that has paths pointing back to the Web Connection install folder (or wherever Console.exe was run from), and a desktop shortcut for the project is also created that points at that config.fpw file. There's also a SetPaths.prg which does the same thing if you just change path to the folder, as a lot of you do.

Also notice that the temp path has now moved into the deploy folder by default. This is a known location in this project – for the executable it's just ".\temp", for the Web app it's "~/../deploy/temp" – a relative location. Again, if you move this app to the server with this same folder structure, things just work without reconfiguring paths in your config files.

Note that this is just a new default setup. Nothing has changed in the way you configure Web Connection itself meaning that if you still want to put your Web folder into inetpub/wwwroot you can do that as well. You can move the code folder, web folder – anything at all. You just have to manually adjust the configuration settings in IIS and your configuration files to match.

Better late than never right? :-)

Templating and Scripting Improvements

When I released Web Connection 5.50 a few years back, that release paid lip service to the fact that the Web Control framework that was introduced in Web Connection 5.0 didn't really take off. While there are a number of you that are using that framework heavily most developers are continuing to use the templating and scripting features. 5.50 introduced the wwHtmlHelpers set of helpers that provide a lot of the functionality that many of the Web Controls provide for the simpler scripting and templating applications.

For Version 6.0 I plan on adding a couple of important features to templates and scripts:

  • Integrated Partial Rendering
  • Support for Layout/Master Pages

If you've used any sort of server side MVC (Model/View/Controller) framework before, you've probably noticed that Web Connection's ProcessClass/Script/Template mechanism essentially was an early MVC implementation. However, most modern MVC implementations support nested views in a variety of ways. Having an easy way to pull in nested content or Partials makes life a lot easier. Web Connection has always had support for this, but the syntax for it was nasty and very difficult to discover.

Likewise there's the concept of 'Layout' pages that are sort of a master view container. Most applications have a 'base' layout into which other content is added. So you have a Layout page into which you render your actual page content. The page content then in turn can have Partials to render additional reusable components into the page.

In the last couple of days I implemented both Partials and Layout pages for both the template and script engines in Web Connection.

Here's how this will work.

Templates

Web Connection templates are typically rendered with Response.ExpandTemplate() or by using .wc pages through the default script map handler. Templates are basically evaluation only templates that are loaded and executed by parsing the document and replacing FoxPro code expressions and script blocks. Templates only support self-contained code blocks, but don't support structured code that mixes HTML and code.

The new Partial and Layout features fit well with templates however since these operations are basically just expressions calling out to other templates. Here's an example of a set of templates that can interact with each other:

<% Layout="~/LayoutPage.wc" %>
<div>

<h3>This is my rendered content</h3>
<p>
    Time is: <%= DateTime() %>
</p>
<%= RenderPartial("~/PartialPage2.wc") %>
<%= RenderPartial("~/PartialPage2.wc") %>
</div>

RenderPartial() delegates out to another script page, which simply contains more script content just like this page. The partial page in turn can contain additional nested content.
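For illustration, a partial is just another HTML/template fragment – a hypothetical PartialPage.wc might contain nothing more than:

<!-- PartialPage.wc - hypothetical partial: an HTML fragment with template expressions -->
<div class="partial">
    Partial rendered at: <%= TIME() %>
</div>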

The syntax is:

<%= RenderPartial("~/PartialPage.wc") %>

and has to be used with this exact spacing and syntax in order to work properly. The ~/ denotes the root of the Site/Virtual, so this expects a PartialPage.wc to exist at the root of the site. The ~/ syntax is required!

The content on this page is meant to be rendered into a Layout page. You'll notice that this page is really just an HTML fragment, not a full document. There's no <html> or <body> tag – that's meant to be provided by the layout page specified by this syntax:

<% Layout="~/LayoutPage.wc" %>

Again the ~/ is required and points at the root of the site/virtual.

The actual layout page in turn looks like this:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title></title>
</head>
<body>
    <h1>This is my LAYOUT PAGE Header</h1>

    <%= RenderContent() %>

    <hr />
    LAYOUT PAGE Footer
</body>
</html>

This page has the HTML header and it pulls in the content from the previous page we looked at via:

<%= RenderContent() %>

Layout pages are great for creating a basic shell of a layout – that includes the HTML header and other content that ends up on just about every page of your application. It's great for pulling in css and scripts and base layout for pages.

Scripts

Web Connection scripts are typically called using Response.ExpandScript() or by using scripts on disk with a .wcs extension. Scripts are parsed into full PRG based programs that are executed as compiled FoxPro code. Scripts support most of the same features that templates support, but can additionally run structured statements that mix HTML and code. Because scripts are compiled FoxPro, a compilation step is required by Web Connection, and it's not quite as straightforward to update script files if multiple instances are running and have the compiled FXP files locked.

The new Partial and Layout rendering in scripts uses the same syntax I showed with templates.

<% Layout="/wconnect/weblog/LayoutPage.wcs" %> 
<div>

<h3>This is my CONTENT PAGE</h3>
<p>
    Time is: <%= DateTime() %>
</p>
   
<% for x = 1 to 10 %>
    <h3>Partial Content below</h3> 
    <hr />
    <%= RenderPartial("~/PartialPage.wcs") %> 
<% endfor %> 
     
</div>  

In this example a partial is rendered 10 times in a loop which demonstrates the structured mixing of code and HTML that you can't do in templates.

As in the template example, the layout page has to include a call to RenderContent() to force the content page that referenced the Layout page to be rendered.

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title></title>
</head>
<body>
    <h1>This is my LAYOUT PAGE Header</h1>

    <%= RenderContent() %>

    <hr />
    LAYOUT PAGE Footer

</body>
</html>

Layout Sections

You can also create layout sections in the master that are 'filled' from the content page. A typical example is when you want to add scripts or css stylesheets to a page from the content page, or when you have some other area of the page that you want to fill with content from the content page. Essentially this lets you embed content outside of the main content rendering area from the content page.

Start by setting up your layout page:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <%= RenderSection("headers") %>
</head>
<body>    
    <h1>HEADER FROM LAYOUT PAGE</h1>    
    
    <%= RenderContent() %>    

    <hr />    
    LAYOUT FOOTER
    
     <%= RenderSection("scripts") %>     
</body>
</html>

This designates two sections for headers and scripts. The string is a label of your choice – you can have as many sections as you like.

In the content page you can then create sections that will essentially fill these sections in the layout page:

<% Layout="/wconnect/weblog/LayoutPage.wcs" %> 

<% section="headers" %>
     <title>My Content Page</title>
<link href="css/MyPage.css" rel="stylesheet" />
<% endsection %>

<div>
    <h3>This is my CONTENT PAGE</h3>
    <p>
        Time is: <%= DateTime() %>
    </p>

    <h3>Partial Content below (10x)</h3>
    <hr />
    <% for x = 1 to 10 %>
        <%= RenderPartial("~/PartialPage.wcs") %>
    <% endfor %>
    <hr />
</div>

<% section="scripts" %>
    <script src="bower_components/lodash/lodash.js"></script>
<% endsection %>

You can also use script expressions (but not code blocks) inside of sections. Sections can fill parts of a document such as a user status widget that displays user login information. Lots of options here.
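Here's a quick sketch of an expression inside a section (the embedded value is made up for illustration):

<% section="scripts" %>
    <script>
        // embed a server side value into client script
        var loadedAt = "<%= TTOC(DATETIME()) %>";
    </script>
<% endsection %>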

At this point sections only work in scripts, not templates. It took some extremely ugly recursive and generator style code to make this work, and I'm not sure at this time whether it'll be possible to make this work with the current template engine. But then again, if you are doing stuff complex enough to require sections and layout pages you probably should be using scripts anyway.

Finicky Tags

In order to keep the parsing overhead to a minimum when dealing with these new 'directive' tags, the tag names have to match exactly – including the spacing inside of the tags. This allows quick searches for these tags in the page rather than traversing the document to find them, which dramatically speeds up performance. The new tags are not directly executed by FoxPro – the various expressions were chosen as easy to remember, logical names in order to make them easy to use in the page. They are translated into actual executable script code when the page is parsed – they expand into something different. Sections in particular turn into some real ugly code with literal strings, but that's the price for this level of complexity. As before, you'll be able to see the underlying code that makes the pages actually run in the codebehind PRG files that are generated.

Partials and Layout should make building complex apps quite a bit less repetitive, as you can do away with a lot of boilerplate code and simply put it into a Layout page.

Although you could do at least partials before, these new features standardize the behavior significantly and make it more transparent.

Improved support for REST/API Services and Mobile Web

Mobile applications are becoming more important by the day as more people rely on mobile devices of various sizes to access their data and applications. Building mobile applications is quite different from building 'classic' Web applications in that they need simpler user interfaces and have to handle smaller displays more effectively. As a result we've seen a big move towards client centric applications that use HTML/CSS and JavaScript to handle the user interface, using the server to serve static resources and server based services to feed data.

In the last releases of Web Connection (5.65 and later) there have been steady improvements to provide that REST/API service layer via the new wwRestProcess class, which facilitates consuming and serving of JSON data from your FoxPro business objects. There are a number of additional improvements that can be made here in terms of performance and simplifying the interface even more. This will be mostly an incremental change, but it's an important piece in the overall Web Connection 6 package.
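To give an idea of the programming model, here's a rough sketch of a JSON endpoint method on a wwRestProcess subclass (the method name and result properties are made up, and parameter handling is simplified):

FUNCTION ServerInfo(loInput)
*** The return value is serialized to JSON for the client
loResult = CREATEOBJECT("Empty")
ADDPROPERTY(loResult, "message", "Hello from Web Connection")
ADDPROPERTY(loResult, "timestamp", DATETIME())
RETURN loResult
ENDFUNC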

The other piece of that is the client side. This isn't directly related to Web Connection other than providing improved examples that demonstrate how to build these types of applications, which use an all browser based client side user interface and work well both on mobile devices and in full sized applications. The Music Store sample was the start of that. Again, some of the functionality is there already, but it's part of the packaging for 6.0.

Updated Samples And Default Templates

As mentioned at the beginning of this post, the main reason to consider all of this work is to make it easier to get started and to set up new projects that are ready to go. This is for my own work as well, as I still work with a host of customers, both to showcase specific functionality and to create actual new projects.

The goal is to build new templates that default to using Bootstrap for user interface layout. I plan on providing a default theme based on Bootstrap that provides a basic customized setup which can be easily adjusted further (or reverted to stock Bootstrap if you choose).

This seems like a minor thing, but there are a number of issues related to it. All of the samples in Web Connection are based on old custom CSS that doesn't use any special CSS or other dependencies. While that worked in the old days, it really looks dated now. This involves going through all the existing samples, all the existing admin features and the starter templates. This will take some time unfortunately, and most of it is tedious, boring work :-)

Along the same lines, a lot of the simple feature samples in Web Connection need to be updated – some just need a visual refresh, others need a complete logic rebuild to work the way you'd do things today. Some of these samples are nearly 20 years old, so yes, there's some room for improvement I suppose.

Likewise the Web Control Framework functionality uses all the old styling, so I'm not sure that I will be updating these controls to use the new layouts. There's too much baggage there to make that work – that might be a 6.1 feature.

Dropping Visual FoxPro 8 Support

One thing that has made maintenance of Web Connection more problematic over the years is support for Visual FoxPro 8. With version 6 I will be dropping support for VFP 8. There's no good reason for anybody to be running VFP 8 these days, given that VFP 8 and VFP 9 have very, very little in the way of feature differences. This was not the case for the earlier versions 7 and 6, but those haven't been supported for some time.

I know some of you will howl at this (I still get frequent requests for 'does this tool work with VFP 6?'), but there's no excuse to be running anything but VFP 9 if you are running FoxPro applications. Maintaining VFP 8 support has been an issue, as it requires extra testing, extra files to distribute and keep in sync, and having to remember what we can and can't use in the latest versions of our tools.

Other Odds and Ends

There are a few other things that need attention as well. The authentication features in Web Connection are reasonably functional, but hooking them up currently is badly documented and could be made easier by providing a few additional hooks. It also would be nice to provide a default login page template that is easy to customize as part of a new application – this way you can simply modify a Web page to get the look that you want, without having to override a bunch of process class methods as you have to now to essentially do the same thing.

The various Wizards will also need some updating to reflect the changes to the new templates. For the most part the updates will be minor, except for the Process Wizard which will need an overhaul similar to the New Project Wizard's.

What else?

If you're using Web Connection, what's on your wishlist? What features do you want to see that I've missed that are within the realm of the core framework? I know there are tons of requests to build this or that vertical functionality, but I can tell you that's not going to happen from me – that's what you guys can build on top of Web Connection :-) But if you have core framework features or utilities that you think would make your life easier, I'd love to hear about it.


Single File Image Uploads with plUpload and Web Connection


Monday, April 20, 2015, 10:31:00 AM

I get a lot of questions about uploading files in regards to Web Connection. Uploading is a thorny problem, especially if you need to upload multiple files or if you want to use asynchronous uploads via AJAX rather than submitting files through standard <input type="file"> elements, which suffer from many problems. Using asynchronous upload components allows getting around the 16 meg string limit in FoxPro and Web Connection, and lets you do the uploads in the background while you display progress information and keep your UI otherwise active.

Web Connection has had support for plUpload for some time now. There’s a plUpload server handler that allows you to use the plUpload Queue component which is a visual UI component that’s provided by the plUpload library. Web Connection provides a generic plUploadHandler class that can be used on the server side to capture uploaded files.

The examples show how to do this using the plUpload Queue component, which is a bulky UI control that you can drop on any form.

You can check out this example and the following single file upload example on the Web Connection Samples page:

http://west-wind.com/wconnect/webcontrols/plUploadSample.htm

Single File Uploads

The full component is useful for batch uploads, but sometimes you don’t want or need this rather bulky UI to handle file uploads. For example I recently needed to add image uploads to the West Wind Message Board and in order to do this I really wanted a much simpler UI that simply triggers the image upload from a button:

[Screenshot: Insert Image dialog with single upload button]

There’s a simple button that when clicked allows you to pick a single file (although you can also have it pick multiples) and then immediately starts uploading the image to the server. When done the image URL returned is then embedded into the user’s message at the current cursor position.

Let’s see how we can do this using the simpler plUpload base API.

Using the base plUpload API

plUpload includes a number of different upload components but behind it all sits a base uploader API which doesn’t have any UI components associated with it. For my image uploader I don’t want any extraneous UI – I only want to upload the file and provide some basic progress information.

To do this we can use the core plUpload API. Here’s how this works.

Let’s start with the Add Image Dialog HTML which displays the modal dialog above:

<div id="InsertImageDialog" class="dialog" style="display: none; min-width: 320px; width: 80%">
    <div class="dialog-header">Insert Image</div>
    <div class="dialog-content">
        <label>Web Image Url: <small>(external source ie. Flickr, Imgur, DropBox etc.)</small></label>
        <input type="url" id="txtImageLink" style="width: 98%" />
        <button type="button" id="btnImageSelection" class="bigbutton" style="margin-top: 5px;">Insert Web Image</button>
        <div id="container" style="margin-top: 30px;">
            <label>Upload an Image:</label>
            <button id="btnUploadFile" onclick="return false;" class="bigbutton">Select file to Upload</button>
        </div>
    </div>
</div>

Next we need to add the relevant plUpload script to the page.

<script src="scripts/plUpload/plupload.full.min.js"></script>    

Then we need to configure the plUpload Javascript code which can live in a script tag on the bottom of the page or as is the case here inside of a page level JavaScript file.

var uploader = new plupload.Uploader({
    browse_button: 'btnUploadFile', // you can pass in id...
    container: document.getElementById('container'), // ... or DOM Element itself
    url: "ImageUpload.wwt",
    runtimes: 'html5,flash,html4',
    chunk_size: '64kb',
    // Resize (downsize really) images on client side if we can
    resize: { width: 1024, height: 768, quality: 85 },
    filters: {
        max_file_size: '4mb',
        mime_types: [
            { title: "Image files", extensions: "jpg,gif,png" }
        ]
    },
    // Flash settings
    flash_swf_url: 'scripts/plupload/js/Moxie.swf',
    init: {
        PostInit: function () { },
        FilesAdded: function (up, files) {
            // start uploading - we only accept one file
            uploader.start();
        },
        UploadProgress: function (up, file) {
            showStatus("Upload Progress: " + file.percent + "% complete", 3000);
        },
        Error: function (up, err) {
            showStatus("Upload failed: " + err.code + ": " + err.message);
        },
        FileUploaded: function (up, file, response) {
            uploader.removeFile(file);
            var imageUrl = response.response;
            if (imageUrl) {
                markupSelection("<img src=\"" + imageUrl + "\" />");
                $("#InsertImageDialog").modalDialog("hide");
            }
        },
        UploadComplete: function (up, files) { }
    }
});
uploader.init();

The plUpload script code is pretty descriptive, so not much explanation is needed. The most important properties here are the browse_button property, which is an id pointing at a button or link that when clicked triggers the file selection, and the url property, which points at the server target URL that will respond to the plUpload file chunks sent to the server.

The interesting stuff happens in the event handlers.

Since I’m only dealing with a single file selection, I can use the FilesAdded event to immediately start the file upload under program control. This event fires whenever you select one or more files, and if you use a single button it makes sense to just kick off the upload without further user confirmation.

For progress and error information I use the ww.jquery.js showStatus() function, which is a quick and easy way to display status information on a status bar on the bottom of the form.

The most important piece though is the FileUploaded event, which is used to actually confirm the file upload and capture the generated filename that the server saved. The function receives the upload component, the individual file object and an HTTP response object. The main thing we're interested in is the response property of the response object, which provides a fully qualified image URL pointing at the image that the server saved. This value is captured, an <img> tag created and then pasted into the text control at the current cursor position.

Handling the Image plUpload on the Server Side

As mentioned earlier, Web Connection includes a plUploadHandler class that makes it pretty straightforward to handle uploads. The class works in conjunction with a wwProcess class, handling the plUpload file transfer chunks and putting the files back together on the server. This makes it possible, for example, to post files larger than 16 megs to the server, as each file is sent in small chunks that are progressively appended to a file on the server.

To implement the server side you’ll create two methods:

  • A standard wwProcess EndPoint Method
  • An OnUploadComplete Event that is fired when the upload is complete

The first is the actual endpoint method that is referenced by the plUpload component. If you look back on the JavaScript configuration you see that it points at ImageUpload.wwt which translates to the following method in my wwThreads Process class:

FUNCTION ImageUpload()
 
*** Make sure plUploadHandler is loaded
SET PROCEDURE TO plUploadHandler ADDITIVE 
 
LOCAL loUpload as plUploadHandler
loUpload = CREATEOBJECT("plUploadHandler")
 
*** Upload to temp folder
loUpload.cUploadPath = ADDBS(THIS.oConfig.cHtmlPagePath) + "temp"
IF(!IsDir(loUpload.cUploadPath))
    MD (loUpload.cUploadPath)
ENDIF    
 
BINDEVENT(loUpload,"OnUploadComplete",THIS,"OnImageUploadComplete",1)
 
*** Constrain the extensions allowed on the server
loUpload.cAllowedExtensions = "jpg,jpeg,png,gif"
 
*** Process the file or chunk
loUpload.ProcessRequest()
 
ENDFUNC

This code creates a plUploadHandler component and tells it to store uploaded files in a temp subfolder. This is a temporary folder that files are uploaded to before getting copied to a permanent location and discarded.

We then hook up the OnUploadComplete event by binding it to another function, OnImageUploadComplete(), that will do the post processing and moving of our file. Finally we can specify the file extensions that are allowed for the image, and then we're ready to process the current request with loUpload.ProcessRequest().

This method is called multiple times for each file uploaded. Files are uploaded in chunks so a 2meg file is broken into many smaller chunks that are sent and processed one at a time. When a file is completed the OnImageUploadComplete event is fired. Here’s what that looks like:

FUNCTION OnImageUploadComplete(lcFilename, loUpload)
LOCAL lcUrl, lcFile
 
lcUrl = this.ResolveUrl("~/temp/" + lcFileName)
 
*** Resize the image
lcFile = ADDBS(loUpload.cUploadPath) + lcFileName
 
*** Delete expired files - temp files live for only 10 minutes (600 seconds)
DeleteFiles(ADDBS(JUSTPATH(lcFile)) + "*.*",600)
 
lcNewFile =  SYS(2015) + "." + JUSTEXT(lcFile)
lcNewPath = this.cHtmlPagePath + "PostImages\" + TRANSFORM(YEAR(DATETIME())) + "\"
IF !ISDIR(lcNewPath)
    MD (lcNewPath)
ENDIF
 
lcFileName = lcNewPath + lcNewFile
*ResizeImage(lcFile,lcFileName,1024,768)
COPY FILE (lcFile) TO (lcFileName)
DELETE FILE (lcFile)
 
lcUrl = this.ResolveUrl("~/PostImages/" + TRANSFORM(YEAR(DATETIME())) + "/" + lcNewFile)
lcUrl = "http://" + Request.GetServerName() + lcUrl
 
*** Write out the response for the client (if any)
*** In this case the URL to the uploaded image
loUpload.WriteCompletionResponse(lcUrl)
 
ENDFUNC

The handler is passed the original file name (just the filename without a path) and the loUpload component.

For the message board I want to capture the uploaded file, rename it with a random name and then move it to a more permanent folder – in this case PostImages/YEAR/. Once the file has been copied, the original uploaded file in the temp folder can be deleted.

Finally the OnImageUploadComplete() method has to return the new URL to the client so that the client can link to the image. If you recall, in the JavaScript we received a response object with a response property. The response property holds whatever we write out in WriteCompletionResponse(). The most useful thing here is almost always the full URL to the resource that was uploaded, assuming the client is allowed to use that resource in some way. In the client application the URL is used to embed an image link into the user's message text.

Quite a bit of Code, but easy to do

What's described above is the entire process involved, and it's not entirely trivial. There are a fair number of moving parts in this code, both on the client and on the server, but between plUpload and Web Connection's plUploadHandler the code you actually have to write is pretty minimal. Most of what you see above is boilerplate that you can cut and paste into place, customizing only the result handlers that run when uploads complete on the server and on the client.


Access-Control-Allow-Origin Header in West Wind Web Connection


No comments
Thursday, April 2, 2015, 1:44:00 AM

Over the last few weeks I've gotten a ton of questions about applications that need to support cross-site script access in relation to West Wind Web Connection. This is typically a requirement for service applications that are accessed from mobile devices or other clients that don't load the actual site content from the origin Web site. Typically it means you are building an HTTP or REST service that communicates with the client via JSON.

Web Connection 5.70 and later introduced a host of new features – including the powerful and easy to use wwRestProcess Process class – that make it super easy to create HTTP based JSON services, and it looks like quite a number of users are taking advantage of these features to integrate their Web Connection backends with various mobile device applications.

So it's not surprising that questions about the Access-Control-Allow-Origin header are coming up more frequently now. When you're building mobile or distributed services that are called from multiple origin sites or devices, you have to address CORS to enable cross-domain service access.

Luckily making that happen is pretty easy.

What is CORS?

By default Web browsers (and Web browser controls) allow AJAX/XHR requests to retrieve data only from the same origin server as the originating request. This means that if my page is running on http://west-wind.com/, all of its XHR requests have to go to the same west-wind.com domain.

The reason for this restrictive behavior is to limit cross-site scripting attacks: if your site's user input can be compromised to include executing <script> tags, an attacker could potentially send sensitive information – cookie data or anything else contained on a page – to another server. The XHR restrictions are meant to prevent this.

CORS is a simple protocol that provides a way around this: the server specifies an Access-Control-Allow-Origin header that declares which domains (or all domains) are allowed access to the site.
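To illustrate, here's a hypothetical exchange (the URLs are made up for the example): the browser identifies the page's origin on the request, and the server opts in via the response header:

GET /wconnect/CustomerList.wwd HTTP/1.1
Host: api.west-wind.com
Origin: http://west-wind.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: application/json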

Setting the Access-Control-Allow-Origin Header in Web Connection

It’s easy to set this header in Web Connection:

Response.AppendHeader("Access-Control-Allow-Origin","*")

on any request that fires. Here I'm allowing any source domain to access the site. You can also specify a specific origin such as http://west-wind.com. Note that browsers generally honor only a single origin value (or *) in this header, so if you need to allow several known domains – say http://west-wind.com and http://weblog.west-wind.com – the usual approach is to echo back the caller's origin when it matches your allowed list.
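Here's a minimal sketch of that approach – assuming the incoming Origin header is available through Request.ServerVariables(), which you'll want to verify against your Web Connection version:

*** Echo back only origins from a known list
*** (Access-Control-Allow-Origin accepts a single value per response)
LOCAL lcOrigin
lcOrigin = LOWER(Request.ServerVariables("HTTP_ORIGIN"))
IF INLIST(lcOrigin,"http://west-wind.com","http://weblog.west-wind.com")
   Response.AppendHeader("Access-Control-Allow-Origin",lcOrigin)
ENDIF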

Alternatively, you can add this header globally for an entire Process class by setting the value in the OnProcessInit() method:

FUNCTION OnProcessInit
 
Response.GzipCompression = .T.
Request.lUtf8Encoding = .T.
 
Response.AppendHeader("Access-Control-Allow-Origin","*")
 
RETURN .T.
ENDFUNC

Finally you can also use IIS to set this header globally for the entire site, assuming you're using IIS 7 or later:

<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <clear/>
        <add name="Access-Control-Allow-Origin" value="*" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

And voila – that's all it should take to let your site serve remote requests from mobile devices or from pages served off a different domain.

Things that make you go Hmmm.

Alas, I've never really understood the value of this CORS implementation. Cross-site scripting is a big problem, no doubt, but I don't see how CORS helps when the *server* gets to specify the domains it wants to allow. If somebody is attacking your site and wants to use XHR to post stolen data somewhere, it's trivial for the attacker to set up a server that adds the above header – nothing has been prevented at that point. This seems like yet another security protocol that makes things a little more inconvenient for developers while adding very few actual safeguards. Just as JSONP and hidden iframes have been used to get around XHR cross-site limitations for years, CORS doesn't really prevent any of that.

The point of CORS is really to let the server decide whether it wants to serve content to a given client. But CORS is often billed as cross-site scripting prevention, which it really isn't.


Use wwJsonSerializer to create Sample Data


No comments
Thursday, December 18, 2014, 8:43:55 PM

I frequently create demos for various applications and components, and quite often I build objects on the fly from EMPTY objects using ADDPROPERTY(). The code that does this is a bit verbose and looks something like this:

*** Note objects are serialized as lower case
loCustomer = CREATEOBJECT("EMPTY")
 
*** Recommend you assign your own ids for easier querying
ADDPROPERTY(loCustomer,"_id",SYS(2015))
ADDPROPERTY(loCustomer,"FirstName","Markus")
ADDPROPERTY(loCustomer,"LastName","Egger")
ADDPROPERTY(loCustomer,"Company","EPS Software")
ADDPROPERTY(loCustomer,"Entered", DATETIME())
 
loAddress = CREATEOBJECT("EMPTY")
ADDPROPERTY(loAddress,"Street","32 Kaiea")
ADDPROPERTY(loAddress,"City","Paia")
ADDPROPERTY(loCustomer,"Address",loAddress)
 
loOrders = CREATEOBJECT("Collection")
ADDPROPERTY(loCustomer,"Orders",loOrders)
 
loOrder = CREATEOBJECT("Empty")
ADDPROPERTY(loOrder,"Date",DATETIME())
ADDPROPERTY(loOrder,"OrderId",SUBSTR(SYS(2015),2))
ADDPROPERTY(loOrder,"OrderTotal",120.00)
loOrders.Add(loOrder)
 
loOrder = CREATEOBJECT("Empty")
ADDPROPERTY(loOrder,"Date",DATETIME())
ADDPROPERTY(loOrder,"OrderId",SUBSTR(SYS(2015),2))
ADDPROPERTY(loOrder,"OrderTotal",120.00)
loOrders.Add(loOrder)

While this works, it's pretty verbose and a little hard to read with all the nesting and relationships. It's still a lot less code than explicitly creating a class and then assigning properties, but even so, it's hard to see the relationship between the nested objects. The more nesting there is, the nastier this sort of code gets, regardless of whether you use ADDPROPERTY() or CREATEOBJECT() with assignments.

Given that JSON serialization is readily available now, it's actually pretty easy to replace this code with something that's a little easier to visualize.

Take the following code, which produces a very similar object by deserializing a JSON string into an object:

DO wwJsonSerializer
 
*** Create a sample object
TEXT TO lcJson TEXTMERGE NOSHOW
{
    _id: "<< loMongo.GenerateId() >>",
    FirstName: "Rick",
    LastName: "Strahl",
    Company: "West Wind",
    Entered: "<< TTOC(DATETIME(),3) >>Z",
    Address: {
        Street: "32 Kaiea",
        City: "Paia"
    },
    Orders: [
        { OrderId: "ar431211", OrderTotal: 125.44 },
        { OrderId: "fe134341", OrderTotal: 95.12 }
    ]
}
ENDTEXT
 
loSer = CREATEOBJECT("wwJsonSerializer")
loCustomer = loSer.DeserializeJson(lcJson)
? loCustomer.LastName
? loCustomer.Entered
? loCustomer.Address.City
 
loOrder = loCustomer.Orders[1]
? loOrder.OrderTotal

The data is created in string format and embedded inside of a TEXTMERGE statement to produce a JSON string. The JSON string is then sent through a JSON deserializer which in turn produces a FoxPro object (created from EMPTY objects and Collections).

Note that you can also inject data into this structure because the TEXTMERGE clause allows merging FoxPro expressions. Again, this works well to essentially 'script' your object definition.

Caveats

I use this mostly for demos and other scenarios where I need to create some static data quickly and for that it works very well because I can control the content exactly with the values I push into the object.

However, keep in mind that JSON requires data to be encoded to a certain degree: strings need to escape extended characters, quotes and a host of other things. If you're using Web Connection you can use the JsonString() and JsonDate() functions in the wwUtils library to handle the encoding for you:

TEXT TO lcJson TEXTMERGE NOSHOW
{
    _id: "<< loMongo.GenerateId() >>",
    FirstName: << JsonString(lcFirstName) >>,
    LastName: << JsonString(lcLastName) >>,
    Entered: << JsonDate(DATETIME()) >>,
    …
}
ENDTEXT

Although this version gets a little more verbose as well, it's still easier to read than the pure FoxPro object syntax.

Visualize

Anyway – something to keep in mind. When you need more expressive syntax to construct objects – especially complex ones – a JSON serializer might just do the trick and give you that complex structure in a more visual way.
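As a final aside, the trip works both ways: once you've built up an object this way you can hand it back to the serializer to produce JSON again. A quick sketch, assuming wwJsonSerializer's Serialize() method (check your version for the exact signature):

*** Modify the deserialized object and turn it back into JSON
loCustomer.Company = "West Wind Technologies"
lcUpdatedJson = loSer.Serialize(loCustomer)
? lcUpdatedJson   && displays the modified object graph as a JSON string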


Visual Studio Community Edition–Visual Studio for Web Connection


No comments
Tuesday, December 16, 2014, 3:04:06 AM

A couple of months ago Microsoft announced the release of the new Visual Studio Community edition to replace all the Express versions. As you probably know, the Express versions of Visual Studio were severely trimmed down, specialized versions of Visual Studio – essentially a core version of Visual Studio with only certain packages enabled. While these versions were functional and certainly provided a lot of value, each of the Express versions had some major features missing. The Visual Studio Community edition is a full version of Visual Studio Professional and is available free of charge to most developers outside of large enterprises.

Web Express Limitations for Web Connection

In the past I've recommended using Visual Studio Express for Web, especially for developers taking advantage of the Web Control Framework. It provides the excellent HTML, CSS and JavaScript editors, and explicit support for WebForms, which is what the Web Connection Web Control Framework uses to provide its designer support in Visual Studio. But the Express versions had some limitations that definitely impacted Web Connection, namely the lack of support for the Web Connection Add-in, which provides the ability to bring up Visual FoxPro and your favorite browser directly from a Web Connection page.

With the introduction of the Visual Studio Community edition, that limitation – and many others – is gone.

Visual Studio Community Edition is Visual Studio Professional

The new Community edition is essentially a free version of Visual Studio Professional for most individuals and small organizations. It provides the full feature set of Visual Studio Pro, which includes support for plug-ins, multiple projects per solution and the whole gamut of project types that Visual Studio supports. There are no limitations or disabled features – it's a full version of Visual Studio Pro.

If you are a Web Connection developer and you've been using the Web Express edition, you should definitely download and install the Community edition to get support for the Web Connection Add-in. This version also allows you to open multiple projects in the same solution, so you can have multiple Web sites open at once, as well as open the Web Connection Web Controls design-time control project for creating your own components.

You can download the Community edition from the Visual Studio download site.

Who can use the Community Edition

As mentioned, the Community edition is free to most small and medium sized organizations. With the exception of larger organizations, you can use the Visual Studio Community edition for free. Here's what the Community Edition site shows for who can use it:

  • Any individual developer can use Visual Studio Community to create their own free or paid apps.

Here’s how Visual Studio Community can be used in organizations:

  • An unlimited number of users within an organization can use Visual Studio Community for the following scenarios: in a classroom learning environment, for academic research, or for contributing to open source projects.
  • For all other usage scenarios: In non-enterprise organizations, up to 5 users can use Visual Studio Community. In enterprise organizations (meaning those with >250 PCs or > $1 Million US Dollars in annual revenue), no use is permitted beyond the open source, academic research, and classroom learning environment scenarios described above.

What’s the Downside?

I think the Community edition is a huge step in the right direction for Microsoft. Developer tools are becoming more and more of a commodity, used more to sell a platform than anything else, and this edition at least provides a full featured development tool for those who want to work with Microsoft technologies. The fact that the HTML, CSS and JavaScript editors are top notch for any kind of Web development is an additional bonus.

The one downside is that Visual Studio is large. It uses a fair amount of system resources, so you need a reasonably fast machine to run it. However, it's nothing like the resource hog that older versions of Visual Studio were. If it's been a few years since you last tried Visual Studio, you really should give it another shot, as performance – especially editor performance and load times – has improved significantly.

As a Web Connection Developer Should you care?

If you're using the Web Control Framework you will definitely want to use the Community edition, as it gives you full support for the designer integration and the Web Connection Add-in to load your WCF pages. If you're on one of the Express editions, go make the jump – it's totally worth it.

But even if you're not using the Web Control Framework and its WebForms designer integration, Visual Studio's HTML, CSS and JavaScript editors are top notch – as good as or better than just about anything out there. The only other tool I would put close in comparison is JetBrains WebStorm, which is an excellent tool for pure HTML5 and JavaScript (including Node) based applications.

If you are using any version of Visual Studio 2012 or 2013 (including the Community edition), make sure you install Web Essentials, which provides additional HTML, CSS and JavaScript features including support for LESS, SASS, Zen Coding (a very efficient HTML shortcut template language), automatic JS and CSS minification and much more. As the name implies, it's essential – I don't consider it an optional add-in.

Regardless of whether you do Web Connection development, or raw HTML coding, here are some features in Visual Studio that I wouldn’t want to live without:

  • HTML Element and Attribute IntelliSense
  • HTML and CSS Formatting (as you type or explicit)
  • CSS IntelliSense (CSS properties, reference props in docs)
  • F12 Navigation to CSS classes
  • Automatic LESS parsing and saving
  • Excellent JavaScript IntelliSense (especially with _references.js file)
  • Basic JavaScript Refactoring

Web Essentials:

  • Zen Code
  • Script and CSS Compression
  • HTML Color Picker, Darken/Lighten
  • CSS Browser Support Preview
  • CSS Vendor Prefix injection
  • Font and Image Previewers

There's lots more, but those are some of the highlights for me. If you haven't tried Visual Studio in a long while or were put off by the pricing of the full versions, give the Community edition a try…


SSL Vulnerabilities and West Wind Products


No comments
Wednesday, October 15, 2014, 6:32:36 PM

Several people have sent me frantic email messages today in light of the latest POODLE SSL vulnerability, asking whether it affects West Wind products.

The vulnerability deals with bugs in the older SSL protocols, and the main concern on the server side is that protocol fallback from TLS to SSL can be exploited. TLS is the current protocol that superseded SSL; only the older SSL stacks are vulnerable. This is most critical on Web servers, so you'll want to disable the older SSL protocols on IIS and leave only TLS enabled. Although we still talk about 'SSL certificates', modern servers actually negotiate TLS and fall back to SSL only for old clients.

You can check your server’s SSL certificate status here:
https://www.ssllabs.com/ssltest/index.html

You'll want to see that all versions of SSL are disabled, or at the very least that the server doesn't support protocol fallback to the SSL protocols.

When I ran this for my site running Server 2008 (IIS 7.0) I found SSL 3 enabled, but the server did not support automatic protocol fallback, which avoids the main POODLE issue. I've since disabled SSL v3 on the server. The good news is that while the configuration wasn't at maximum strength, the server was not found to be vulnerable to this issue even with the default settings in place.

Disabling SSL on IIS

If you see that SSL versions are enabled – even though you have TLS-capable certificates, which just about all of them should be by now – you can disable specific protocols. This is done through registry keys, but be aware that these settings are global for the entire machine. You can disable or enable protocols for both the client and the server.

Here's a KB article from Microsoft to check out that tells you how to disable older SSL protocols.
http://support.microsoft.com/kb/187498
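As an example, per that article, disabling SSL 3.0 on the server side comes down to registry values like the following (shown as a .reg snippet – verify the exact keys against the KB article before applying, and note that a reboot is required for the change to take effect):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000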

SSL and WinInet

The West Wind Client Tools, and specifically the wwHTTP class, rely on the Windows WinInet library to provide HTTP services. The POODLE issue is much less critical for the client, since the attack vector is the receiving Web server. But I double-checked here on my local machine, and I can see that WinInet on my Windows 8.1 install uses TLS 1.2 when connecting to a secure server.

Capturing an https:// session from Fiddler shows the following request header signature on the CONNECT call:

Version: 3.3 (TLS/1.2)
Random: 54 3F 15 D6 B5 3E 6B F5 AD 71 41 FB 4C 39 B9 30 C5 21 04 A4 76 7F 87 A5 1A BA D6 83 19 B3 10 3B
SessionID: empty

which shows TLS 1.2 as the protocol offered on the initial request.

I don't know what older versions of Windows – XP in particular – use, but I suspect even XP is using at least TLS 1.0 at this point. Maybe somebody can check (use Fiddler, disable the Decrypt HTTPS connections option, then capture HTTPS requests and look at the raw request/response headers).

Nothing to see here

From the West Wind perspective this issue is not specific to the tools, but to the operating system. So make sure the latest patches are installed and, if necessary, disable the old SSL protocols on your servers. Client software is not really at risk, since the attack vector is a receiving Web server. Regardless, even the client tools appear to be using the latest, safer protocols, so the West Wind tools are good to go.


Figuring out Web Connection Versions


No comments
Friday, September 26, 2014, 7:40:23 PM

On various occasions I've gotten questions regarding how to figure out which version of Web Connection is running on a server. 'Web Connection' has several components, so the version numbers may differ, and while it's not absolutely critical that all components are on the same version, it's a good idea to keep them in sync.

Connector Versions

Web Connection has three different connectors:

  • Web Connection .NET Module (recommended for IIS 7 and later)
  • Web Connection ISAPI Module
  • Web Connection Apache Plug-in

For most applications I'd recommend using the .NET module, as it has the most features and is the most reliable, especially when running under COM, since it has a high performance managed thread pool for managing the COM instances.

Each of these components carries its own version stamp, and there are several ways you can check for versions:

  • Check the actual DLLs
  • Check the Status Form after a Request
  • Check the Module/ISAPI Administration page

Check the actual DLLs

The DLLs are stamped with a version attribute that indicates the actual version. To check it, right-click either DLL in Explorer and look at the file details (Properties | Details) to find the embedded version number. The .NET module and the ISAPI module are updated regularly:

(Figure: WebConnectionModule.dll version number shown in the Explorer file details)
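You can also read the same version resource from FoxPro code using the AGETFILEVERSION() function. A small sketch – the DLL path is just an example, so adjust it to your installation:

*** Read the version resource from the connector DLL
LOCAL laVersion[15]
IF AGETFILEVERSION(laVersion,"c:\wconnect\web\bin\WebConnectionModule.dll") > 0
   ? laVersion[4]   && file version, e.g. 5.69.0.0
ENDIF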

Check Web Connection Status After a Request Has Fired

You can also check the version after at least one request has fired by bringing up the Status form and reviewing the request data, which includes the DLL version.

This is a good way to verify that you're getting the right DLL file. It's also a good place to check the wcConfig path, which tells you which configuration is being read and indirectly where the DLL lives (the same folder for wc.dll, or the bin folder for WebConnectionModule.dll).

Check the Web Connection Module Administration Page

The Web Connection Module administration page also shows the full version of the active DLL. The screen looks slightly different depending on which version of the DLL you are running – the .NET module or the ISAPI component – but both show the version number and status information. For the module it looks like this:

(Figure: DLL version displayed on the Web Connection Module administration page)

For the ISAPI page, the DLL version shows (ISAPI) in parentheses instead.

FoxPro Server Version

Note that the Module page also displays the compiled server EXE's version number, which is useful to tell you what version your own code is. This is only available with the module, not with the ISAPI component.

FoxPro Code Versions

To figure out the Web Connection framework code version, check the version constants in wconnect.h, which look like this:

#DEFINE WWVERSION "Version 5.69"
#DEFINE WWVERSION_NUMBER 5.69
#DEFINE WWVERSIONDATE "September 26th, 2014"

This is the only version marker the FoxPro code in Web Connection provides – individual components are not versioned directly, so if you make changes to any of the Web Connection code these version numbers may no longer be accurate.
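Since these are compile-time constants, you can surface the version in your own code by including the header. For example:

#INCLUDE wconnect.h
? WWVERSION          && e.g. "Version 5.69"
? WWVERSIONDATE      && e.g. "September 26th, 2014"
 
*** Guard code that relies on newer framework features
IF WWVERSION_NUMBER < 5.69
   WAIT WINDOW "Please update Web Connection to 5.69 or later" TIMEOUT 2
ENDIF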

Keeping Versions Straight

These days I get a lot of email regarding old versions that still float around, with lots of people asking what versions they are running and how the various components interact with newer framework code.

The short answer is that the update from 4.x to 5.x is not major, but does require some changes. Updates from version 5.0 to 5.50+ are almost seamless; there are very few changes on that upgrade path.

Smaller version changes are generally just maintenance releases, and I recommend that from time to time you update to the latest version just to stay current. Newer versions tend to fix reported bugs, improve performance for some operations, and pick up security and other critical changes that might be required. It's probably not necessary to update to every new version that comes along, but I suggest that at least once or twice a year you update your code to the latest. Partial version updates are free (5.50+ versions), and keeping your code up to date minimizes future update pain – it's easier to fix one or two minor items at a time than 20 at once.

I hope this addresses some of the questions that have come my way regarding versioning and how best to keep your Web Connection installation up to date.




© Rick Strahl, West Wind Technologies, 2004 - 2015