Rick Strahl's FoxPro and Web Connection Web Log

Outlook Email Automation–Watch out for Administrator Access

Sunday, November 29, 2015, 10:38:36 PM

I was looking into creating pre-filled emails using Outlook Automation earlier today and ran into an unexpected snag. The requirement was to create a pre-filled email containing recipient, subject, body and one or more attachments, and then display it in Outlook.

This is fairly straightforward to do using COM Automation:

*** NOTE: This fails if you run as Administrator (rather than the active user)
loOutlook = NULL
TRY
   *** Attach to an already running instance
   loOutlook = GETOBJECT(,"Outlook.Application")
CATCH
ENDTRY
IF VARTYPE(loOutlook) != "O"
   loOutlook = CREATEOBJECT("Outlook.Application")
ENDIF
IF VARTYPE(loOutlook) != "O"
   MESSAGEBOX("Couldn't create Outlook instance")
   RETURN
ENDIF
loItem = loOutlook.CreateItem(0)   && 0 = olMailItem
loItem.Subject = "New Test Message"
loItem.Body = "Hello World"
*** Add your files to attach here
loItem.Display()

Note that Outlook – like most Office applications – is a Singleton object that expects to run only one instance. So if Outlook is already running you can't use CREATEOBJECT() to create a new instance. Instead, you have to attach to the already running instance using GETOBJECT().

Gotcha: Running as Administrator makes GETOBJECT() fail

I typically run FoxPro as an Administrator because I frequently build COM objects, which requires that you run as a full administrator in order to write COM registration to the registry.

However, when running the above code, it turns out the GETOBJECT() call to capture Outlook.Application fails. It works fine when you run as a non-admin user, or even as an Admin user when User Account Control (UAC) is enabled on the machine and your account is effectively running as a non-admin account.

But when you explicitly run as an Administrator – either via Run As Administrator when you start the app, or from shortcut properties, or if you have UAC disabled on the machine – you'll find that the GETOBJECT() call fails with:

OLE error code 0x800401e3: Operation unavailable.

The reason for this is that when you run as Administrator you are actually running under a different user account (Administrator – duh), and that account can't access the Outlook instance already running on your desktop. The only workaround I could find is to ensure both Outlook and your application run in the same execution context. So it works if you run your app without administrative rights, or if you run both your app and Outlook as Administrator.

As you might expect, it took me a long time to figure out WTF was going on here. According to all the examples I've seen, Outlook should be accessible with GETOBJECT(). It wasn't until I tried the same thing with wwDotnetBridge and .NET COM Interop – getting the same failure in FoxPro but not in LinqPad – that I realized it must have something to do with the actual runtime environment. Sure enough: once I started VFP without the Run as Administrator option, the code worked.

String Tokenizing based on StrExtract()

Thursday, November 19, 2015, 6:18:43 PM

I've been building a number of solutions lately that rely heavily on parsing text. One thing that comes up repeatedly is the need to split strings while making sure that certain string tokens are excluded. For example, a recent Markdown parser I built for Help Builder needs to first exclude all code snippets, then perform the standard parsing, then put the code snippets back for custom parsing.

Another scenario is when Help Builder imports .NET classes and it has to deal with generic parameters. Typically parameters are parsed via commas to separate them, but .NET generics may add commas as part of generic parameter lists.

Both of those scenarios require that code be parsed by first pulling a token out of a string and replacing it with a placeholder, performing some other operation, and then putting the original value back.

For me this has become common enough that I decided I could really use a couple of helpers. Here are two functions:

*  TokenizeString
***  Function: Tokenizes a string based on an extraction string and
***            returns the tokens as a collection. 
***    Assume: Pass the source string by reference to update it
***            with token delimiters.
***            Extraction is done with case insensitivity
***      Pass:  @lcSource   -  Source string - pass by reference
***             lcStart     -  Extract start string
***             lcEnd       -  Extract End String
***             lcDelimiter -  Delimiter embedded into string
***                            #@# (default) produces:
***                            #@#<sequence Number>#@#   
***    Return: Collection of tokens
FUNCTION TokenizeString(lcSource,lcStart,lcEnd,lcDelimiter)
LOCAL loTokens, lcExtract, lnX
IF EMPTY(lcDelimiter)
   lcDelimiter = "#@#"
ENDIF
loTokens = CREATEOBJECT("Collection")
lnX = 1
DO WHILE .T.
    *** Flags: 1 = case insensitive, 4 = include delimiters in result
    lcExtract = STREXTRACT(lcSource,lcStart,lcEnd,1,1+4)
    IF EMPTY(lcExtract)
       EXIT
    ENDIF
    loTokens.Add(lcExtract)
    lcSource = STRTRAN(lcSource,lcExtract,lcDelimiter + TRANSFORM(lnX) + lcDelimiter)
    lnX = lnX + 1 
ENDDO
RETURN loTokens
ENDFUNC
*   TokenizeString
*  DetokenizeString
***  Function: Detokenizes an individual value of the string
***    Assume:
***      Pass:  lcString    - Value that contains a token
***             loTokens    - Collection of tokens
***             lcDelimiter - Delimiter for token id
***    Return: detokenized string or original value if no token
FUNCTION DetokenizeString(lcString,loTokens,lcDelimiter)
LOCAL lnId
IF EMPTY(lcDelimiter)
  lcDelimiter = "#@#"
ENDIF
DO WHILE .T.
    lnId = VAL(STREXTRACT(lcString,lcDelimiter,lcDelimiter))
    IF lnId < 1
       EXIT
    ENDIF
    lcString = STRTRAN(lcString,lcDelimiter + TRANSFORM(lnId) + lcDelimiter,loTokens.Item(lnId))
ENDDO
RETURN lcString
ENDFUNC
*   DetokenizeString

TokenizeString() basically picks out anything between one or more start and end delimiters and returns a collection of these values (tokens). If you pass the source string in by reference, the source is modified to embed token placeholders into the passed string, replacing the extracted values.

You can then use DetokenizeString() to detokenize either individual string values or the entire tokenized string.

This allows you to work on the string without the tokenized values contained in it, which can be useful if the tokenized text requires separate processing or interferes with the string processing of the original string.
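To make that concrete, here's a small sketch of the Markdown scenario mentioned earlier. The fence delimiters are an assumption, and ParseMarkdownBody() is a hypothetical stand-in for whatever parsing step you run on the rest of the document:

    *** Sketch: protect fenced code blocks while the rest of the
    *** document is parsed. ParseMarkdownBody() is hypothetical.
    lcDoc = "Some text" + CHR(13) + ;
            "```" + CHR(13) + "IF x < 2" + CHR(13) + "```" + CHR(13) + ;
            "More text"

    *** Pull the code snippets out - lcDoc now contains #@#n#@# tokens
    loTokens = TokenizeString(@lcDoc,"```","```")

    *** Parse the document without the snippets getting in the way
    * lcDoc = ParseMarkdownBody(lcDoc)

    *** Put the original snippets back for their own processing
    lcDoc = DetokenizeString(lcDoc,loTokens)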

An Example – .NET Generic Parameter Parsing

Here's an example using the comma-delimited parameter list scenario mentioned above. Assume I have a list of parameters that needs to be parsed:

DO wwutils

lcParameters = "IEnumerable<Field,bool> List, Field field, List<Field,int> fieldList"
? "Original: " 
? lcParameters

*** Creates tokens in the passed string and returns a collection of the 
*** tokens
loTokens = TokenizeString(@lcParameters,"<",">")

? "Tokenized string: " + lcParameters
* IEnumerable#@#1#@# List, Field field, List#@#2#@# fieldList

? "Tokens:"
FOR lnX = 1 TO loTokens.Count
   ? loTokens[lnX]
ENDFOR

? "Parsed parameters:"
*** Now parse the parameters
lnCount = ALINES(laParms,lcParameters,",")
FOR lnX = 1 TO lnCount
   *** Detokenize individual parameters
   laParms[lnX] = DetokenizeString(laParms[lnX],loTokens)
   ? laParms[lnX]
ENDFOR

? "Detokenized string (should be same as original):"
*** or you can detokenize the entire string at once
? DetokenizeString(lcParameters,loTokens)

IEnumerable<Field,bool> List, Field field, List<Field,int> fieldList

Notice that this list contains generic parameters embedded in < > brackets, so I can't just run ALINES() on it. The code above strips out the generic parameters first, then parses the list, then adds the tokens back in. Tokenization lets you pick out a subset of substrings and replace them with tokens, so additional parsing can be done without the noise of the bracketed generic parameters, which would otherwise break the parse logic. This is quite common in text parsing, where you often match patterns and try to avoid the edge cases where the pattern breaks down – and that's where I've found tokenization super useful.

Specialized Use Cases

In Help Builder I have tons of use cases where this applies as documents are rendered:

  • Parsing code snippets out of documents, because the code snippets are rendered 'raw' while the rest of the document gets rendered as encoded content.
  • Links that require special fixup before being embedded into the document – tokenization allows easy capture of the links, replacing the captured token value and writing it back out with a new value.
  • In Web Connection, the various template parsers do something very similar with expressions and code blocks that get pulled out of the document, then injected back in later as expanded values.

There are lots of variations of how you can use these tokens effectively.

This isn't the sort of thing you run into all the time, but for me it's been surprisingly frequent. While it isn't terribly difficult to do manually, it's very verbose code that's ugly to write as part of an application. These two functions greatly simplify application code, shrinking it down to a couple of simple helper calls.

Maybe some of you will find this useful though…

Conference Materials from Southwest Fox

Tuesday, October 20, 2015, 6:33:07 PM

It was a fun but very busy week for me at Southwest Fox last week, with two very long days of Web Connection training and then three more days of sessions at Southwest Fox.

There seemed to be a lot more excitement around alternate technologies and Web functionality than I have seen in recent years, which I find refreshing. This was especially true for client-centric, JavaScript-based front end applications, and in particular AngularJS for building your front end application code. There also continues to be a lot of pent-up demand for building mobile-friendly Web applications, and even though I only briefly showed the hybrid Cordova application, that seemed to get a lot of people very excited about the possibilities.

Anyway, whether you attended the conference or not, here are the links for the materials for the main Southwest Fox sessions, which are hosted in BitBucket repositories.

Building Mobile Web Applications with AngularJs, Bootstrap and Web Connection

This session demonstrated how to build a mobile-friendly Web application that can scale from desktop down to a mobile phone and work well at all display sizes, by way of a sample AlbumViewer application. The application showcases a number of mobile features such as mobile-first design, rearranging user interface elements depending on device width, and using a pure client-centric JavaScript application to drive the front end with AngularJS. AngularJS provides the modularization, two-way databinding and a number of support features for driving the entire user interface from the browser. A Web Connection wwRestService based backend rounds out the example, providing the JSON service data that the front end consumes by retrieving FoxPro objects and cursors and serving them as JSON responses. Finally, there's also an example of the same mobile Web application ported to Cordova (with very minimal changes) and running as a native application on iOS using the Visual Studio Tools for Apache Cordova.

Online Sample Application

Source Code for the AlbumViewer and related Code Samples

Slides and Session Notes

Creating and Consuming Web Services with Visual FoxPro and .NET

This session demonstrated how to build SOAP 1.x based Web Services, using .NET ASMX services as the intermediary to both create Web Services and consume them. For the server side, the samples demonstrate how to use OleDb for direct access to FoxPro data, as well as using MTDLL COM objects to call FoxPro business logic and return objects for consumption by .NET code. Both styles rely on some connecting .NET code to provide the service front end. For the client side, the examples demonstrate importing a Web Service from WSDL and generating a .NET class, then using wwDotnetBridge to call the generated .NET proxy.

Conference White Paper

Sample Projects and Slides on BitBucket

Web Connection Training TimeTrakker Server Side MVC Sample

The first day of the two-day training focused on server side application development. The example is a small time tracking application that is mobile friendly using Responsive Design and takes advantage of some of the new features in Web Connection 6.0, including Layout Pages, partials and sections.

Time Trakker Web Connection 6.0 Server Sample Application

West Wind Web Store Discount for SW Fox Attendees

I also want to remind those of you that attended SW Fox that there's a 10% discount available on all West Wind products. You can use the discount code SWFOX_2015 on the shopping cart to apply the 10% discount. Please provide your SW Fox badge number with your purchase to qualify.

Next Year's Southwest Fox

Dates for next year's Southwest Fox conference were announced for late September. If you haven't been before, it's a great place to meet people and catch up on new ideas for extending the life of FoxPro just a little longer, while at the same time gaining new skills. Mark your calendar.

Web Connection 6.0 Feature: New Project Server Configuration Script

Friday, October 9, 2015, 11:50:25 AM

In Web Connection 6.0 there's an updated Project Wizard that creates a new, self-contained project for you. Web Connection 6.0 projects copy all files into a single folder hierarchy where both the Web folder and the Deploy (code) folder live under the same root. The result of this structure is that projects can more easily be moved around and, just as importantly, various folder paths can be configured as relative paths that won't have to change when a project is moved.
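The layout looks roughly like this – the folder names are assumptions based on the generated configuration script shown further below, which expects the Web folder as a sibling of the code folder and a temp folder below the code folder:

    WebTest2\              && project root - copy this tree to deploy
        Deploy\            && FoxPro code and your compiled EXE
            temp\          && messaging/temp files
        Web\               && Web site root the IIS virtual points at
            bin\           && wc.dll and other binaries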

All of this makes it much easier to deploy projects by simply copying the entire structure to a new deployment location.  The idea is to make it easier to configure a Web Connection application on the server by automating the server configuration with a few simple steps that you can perform before you deploy your application on the server.

A Server Configuration Script

When a new project is generated, Web Connection creates – among other files – a YourApp_ServerConfig.prg file. This plain FoxPro file contains a small bit of code that uses the server configuration support provided with Web Connection to:

  • Create a Virtual Directory for your Application
  • Add to an Application Pool (Web Connection by default)
  • Add Basic and Windows Authentication
  • Create Script Maps for your Application
  • Set Windows Permissions for your Application

Because the new project layout uses a known folder structure, Web Connection can use relative paths to find the Web folder, temp folder and script paths, so it's easy to pregenerate a configuration script. I chose to generate a separate PRG file rather than a pre-compiled EXE simply because a PRG makes it possible to add additional configuration tasks to the script. Not pre-compiling leaves you the option to build a custom configuration script that performs much more sophisticated tasks (like creating multiple virtuals or adding additional users to the ACL list).

A typical generated XXXXX_ServerConfig.prg file looks like this (generated for a project called WebTest2):

*  Webtest2_ServerConfig
***  Function: Templated Installation routine that can configure the
***            Web Server for you from this file.
***            You can modify this script to fit your exact needs
***    Assume: Build this into an EXE file OR
***            add as a command line option to your
***            main application EXE (MyApp.exe "Configure")
***      Pass: lcIISPath  -  IIS Configuration Path (optional)
***                          IIS://localhost/w3svc/1/root
LPARAMETERS lcIISPath

DO wwUtils    

*** Configurable settings
lcVirtual = "WebTest2"
lcScriptMaps = "wc,wcsx,wt2"
lcVirtualPath = LOWER(FULLPATH("..\Web"))
lcScriptPath = lcVirtualPath + "\bin\wc.dll"
lcTempPath = LOWER(FULLPATH(".\temp"))
lcApplicationPool = "WebConnection"
lcServerMode = "IIS7HANDLER"     && "IIS7" (ISAPI)

IF EMPTY(lcIISPath)
   *** Typically this is the root site path
   lcIISPath = "IIS://localhost/w3svc/1/root"
ENDIF

loWebServer = CREATEOBJECT("wwWebServer")
loWebServer.cServerType = UPPER(lcServerMode)
loWebServer.cApplicationPool = lcApplicationPool
loWebServer.cIISVirtualPath = lcIISPath

WAIT WINDOW NOWAIT "Creating virtual directory " + lcVirtual + "..."

*** Create the virtual directory
IF !loWebServer.CreateVirtual(lcVirtual,lcVirtualPath)
   RETURN
ENDIF

*** Create the Script Maps
lnMaps = ALINES(laMaps,lcScriptMaps,1 + 4,",")
FOR lnX = 1 TO lnMaps
    lcMap = laMaps[lnX]
    WAIT WINDOW NOWAIT "Creating Scriptmap " + lcMap + "..."
    llResult = loWebServer.CreateScriptMap(lcMap, lcScriptPath)        
ENDFOR

WAIT WINDOW NOWAIT "Setting folder permissions..."

lcAnonymousUserName = ""
loVirtual = GETOBJECT(lcIISPath)
lcAnonymousUserName = loVirtual.AnonymousUserName
loVirtual = .f.

*** Set access on the Web directory
*** IUSR Anonymous Access
IF !EMPTY(lcAnonymousUserName)
   llResult = SetAcl(lcVirtualPath,lcAnonymousUserName,"R",.t.)
ENDIF

*** Set access on the Temp directory
SetAcl(lcTempPath,"NETWORK SERVICE","F",.T.)

WAIT WINDOW NOWAIT "Configuration completed..."

It's pretty easy to see what's going on in this file, right? The configuration section is just a set of values specified at the top. The code then generates the virtual and scriptmaps and sets permissions.

This is a generated PRG file that gets created in your project root. Because it's just a PRG you can modify it and add additional configuration steps. You can add additional folders to configure, additional accounts to add to the security settings, or even perform other configuration tasks like copying files from a network location or downloading a related dependency into the deploy folder. It's entirely open to you.
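For example, a couple of extra lines at the bottom of the generated script might look like this – the account, folder and virtual names here are made up for illustration:

    *** Hypothetical additions to the generated script:

    *** Give an application-specific account full rights on a data folder
    SetAcl(FULLPATH(".\data"),"MYDOMAIN\WebTest2Service","F",.T.)

    *** Create an additional virtual for an admin site
    llResult = loWebServer.CreateVirtual("WebTest2Admin",LOWER(FULLPATH("..\WebAdmin")))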

If you're using older projects you can still use this file and modify it to reflect your file locations explicitly in the configuration section.

Requires that IIS is installed and you have Admin Privileges

To be clear, this functionality configures IIS for your application, but you need to make sure that IIS is installed with the proper components first.

Note: IIS has to be installed and configured properly
The base IIS install has to be up and running on the machine and configured properly before this will work. You can find out more on how to configure IIS on recent versions of Windows.

Note: Admin privileges required
In order to configure IIS you have to be a full Administrator, so you need to run the compiled EXE or the VFP IDE as an administrator for the IIS configuration to work.

Running the Setup script

You can run this script from the FoxPro dev environment (make sure the Web Connection libraries are referenced):

DO WebTest2_ServerConfig.prg

Remember, if you do this inside of VFP's IDE, make sure you started VFP as an Admin.

Embedded into your Server

The generated script is also embedded into your Web Connection server via a command line parameter. When you create a new project, your main PRG file is generated with a few parameters at the top and a little added code that calls out to the _ServerConfig file when a 'config' command argument is passed to the EXE:

*FUNCTION Webtest2Main
***   Created: 10/09/2015
***  Function: Web Connection Mainline program. Responsible for setting
***            up the Web Connection Server and getting it ready to
***            receive requests in file messaging mode.
LPARAMETERS lcAction, lvParm1, lvParm2

*** This is the file based start up code that gets
*** the server form up and running

*** PUBLIC flag allows server to never quit
*** - unless EXIT button code is executed
PUBLIC goWCServer

*** Load the Web Connection class libraries
IF FILE("WCONNECT.APP")
   DO ("WCONNECT.APP")
ELSE
   DO WCONNECT
ENDIF

*** Run the server configuration script when started with "config"
IF VARTYPE(lcAction) = "C" AND StartsWith(LOWER(lcAction),"config")
   DO WebTest2_ServerConfig.prg WITH lvParm1
   RETURN
ENDIF

*** Load the server - Webtest2Server class defined below
goWCServer = CREATE("Webtest2Server")
IF TYPE("goWCServer") # "O"
   =MESSAGEBOX("Unable to load Web Connection Server",48,;
               "Web Connection Error")
   RETURN
ENDIF
IF !goWCServer.lDebugMode   
   *** (non-debug environment setup elided here)
ENDIF

*** Make the server live - Show puts the server online and in polling mode

When you now compile your Web Connection server into an EXE you'll have an option 'Config' command line switch that you can use to trigger the configuration process:

WebTest2.exe config

This will also trigger the configuration code to run.

There's a second command line option you can apply, providing an IIS metabase path (if you're configuring a non-default Web site or virtual as the base):

WebTest2.exe config "IIS://localhost/w3svc/2/root"

which configures a different Web site (site with the ID of 2).

Running the utility should be very quick and only take a few seconds. You should see a couple WAIT WINDOWs flash by and then you're done.

If you want to double check whether things worked check:

  • Whether the virtual was created in IIS Manager
  • Whether the Script Maps were created in IIS Manager
  • Permissions in the Web and Temp folders

It's the little Things!

Configuration of the Web server continues to be a struggle for a lot of Web Connection developers, and I hope this feature makes it a little easier to get your server configured in a repeatable way. I think this script generation accomplishes two things: it makes the process easier to apply and maintain, and it takes some of the mystery out of Web server configuration. You now have a piece of code that actually tells you what it's doing, and you can control and modify the behavior as you see fit.

With the new project changes it's gotten vastly easier to copy project files by simply 'xcopy deploying' your application. This setup script can then take a standard installation and apply all the basic Web server specific configuration settings for you.

The Web Connection 6.0 beta is available now to registered users of Web Connection 5.5 and later, or you can purchase an upgrade from our Web store. During the beta period we have 15% discount on upgrades.

An Updated Web Connection Add-in for Visual Studio 2015

Tuesday, August 4, 2015, 8:31:13 PM

Visual Studio 2015 shipped a couple of weeks ago and brings many great enhancements for Web developers, with improvements in the HTML, CSS and JavaScript editors. It provides improved IntelliSense support that makes Web Connection and general HTML development easier. Even if you’re not using the Web Control Framework, which is geared directly at the Visual Studio tools, there are many benefits to Visual Studio 2015 when using Scripts and Templates in Web Connection – especially now that Visual Studio is essentially free for most developers via the fully functional Community Edition, which has feature parity with the full Professional version of Visual Studio.

One important update that the Community Edition has over the earlier Express editions is full support for plug-ins. In Visual Studio 2013 you can use the existing Web Connection add-in that shipped with recent versions of Web Connection.

Unfortunately, in Visual Studio 2015 Microsoft broke the way add-ins can be installed. They removed support for community-installed add-ins in the Documents Visual Studio folder, which used to work by simply copying an add-in definition into a folder and referencing the add-in from there. This is no longer supported in VS2015, so the shipped Web Connection add-in no longer works in Visual Studio 2015.

A new VSIX based Visual Studio Add-in

However – I’m happy to announce that I’ve created a new add-in that does work in Visual Studio 2015 (and hopefully beyond). You can download it from here:

To install the VSIX, simply double click it in Explorer and the installation dialog will come up. Once installed the Web Connection add-in will show up in the Installed Extensions in Visual Studio:


To get there go to Tools | Extensions and Updates.

Building a proper VSIX Extension

The Web Connection add-in was built in the VS2008 timeframe, which is a long time ago, and the code required to build that plug-in and hook it up has always been a major nightmare. Hooking up commands to buttons, mapping icons and just getting the buttons bootstrapped took hundreds of lines of nasty, mostly undocumented COM code in the old add-in API, and every time I wanted to change the plug-in I dreaded having to delve into that code.

The new VSIX model is still complex, but it’s a lot easier to configure the actual hook-up and integration pieces to get the add-in bootstrapped. Instead of that nasty COM code there’s now nasty XML configuration, which is at least a bit easier to deal with. It took me only a full day to port the existing plug-in and add a bunch of useful enhancements that simplify its use and provide some additional functionality beyond the Web Control Framework. In the process configuration has also gotten simpler, and if you’re just after the quick browsing features you may not have to configure anything at all.


The Web Connection Add-in provides these features:

  • View In Browser
  • View FoxPro Source Code
  • Web Connection Web Controls Toolbox Items

Here’s what the add-in looks like when you bring it up on a Web Control Framework page:


The Web Connection menu options are available on the context menu in the text editor as well as on the Tools menu, and they are now context sensitive, showing only when they are actually applicable – otherwise the options are hidden. The Show FoxPro Code options only show when you are in a Web Control Framework page, for example, and either option only shows when a document with HTML tags is actually open.

View In Browser

Visual Studio has a native View in Browser feature, but unfortunately it’s not supported for custom extensions, which is what typical Web Connection requests use. WCSX, WCS, WC and any of your custom scriptmaps are all custom extensions, and Visual Studio doesn’t provide the View in Browser feature for them, even in the HTML or WebForm editors.

So the Web Connection add-in provides this functionality for any page that has HTML tags in the document. For Web Connection users this means you can now browse Web Control Framework pages as well as script and template pages, the latter of which is new and improved.

This sounds like a small feature, but to me having this simple option on the menu really improves my workflow with any script-mapped pages considerably, as opposed to manually switching to the browser and refreshing.

New and Improved

The new version of the plug-in is a bit smarter about configuration for figuring out which browser and server path to use. It gets this information from the active Visual Studio configuration if you don’t configure the Web Connection Configuration settings explicitly. The add-in uses the configured Visual Studio browser that is selected on the standard toolbar, and if you are using a Web Site project (as you will with Web Connection projects) it will automatically discover the project’s Web path that Visual Studio uses to start the project.

Note that Visual Studio uses IIS Express by default to start a project, and you need to ensure that IIS Express is started. You can start IIS Express by using View In Browser on an HTML file in the Solution Explorer, which launches IIS Express and leaves it running until Visual Studio is shut down – or you can start the project in debug mode by clicking the Run button. The latter, however, shuts down IIS Express after the debug session is done, so it’s better to start with View In Browser on any .htm page.

Another and perhaps cleaner way to do this is by using a full version of IIS instead of IIS Express which gives the project a permanent URL that always works and you then don’t have to worry about whether IIS Express is running or not.

Show FoxPro Code

If you’re using the Web Control Framework one of the things you frequently need to do is switch back and forth between the HTML Markup code and the code behind FoxPro code that drives the actual coded logic for the page. The Show FoxPro Code option lets you do that by opening a new instance of Visual FoxPro with the appropriate PRG file opened. Here’s what you see after you click on the Show in FoxPro menu item.


Show in external editor does the same thing but uses an external editor that you can configure in web.config. I like to use Sublime Text 3 which is an excellent editor with many plug-ins for all sorts of languages. Matt Slay kindly built a FoxPro language extension for Sublime, which works great for editing FoxPro files. Here’s what you see after clicking on the Show FoxPro Code in External Editor.


Sublime is great because it’s extensible and has tons of plug-ins, is very fast, is cross platform (Windows, Mac, Linux using the same plug-ins) and has support for multiple layout windows which makes it easy to edit HTML and code in the same view.

Why use an external editor at all rather than FoxPro? One big reason is that you can keep the editor open with the file loaded, unlike FoxPro, which requires you to close the file in order for FoxPro to be able to compile it (the FoxPro editor opens the file exclusively and keeps it open while you edit). The result is that the FoxPro editor has a nasty tendency of locking the PRG file being edited. With an external editor you can leave the source file open and FoxPro can still compile the file. It’s very convenient for making quick edits and keeping your place between edit sessions.

Sublime is not free, but I’ve grown very fond of it and use it extensively these days for all sorts of editing and as my general system editor. You can configure any editor you like, though, by using the configuration settings in the web.config file.


The Web Connection Add-in uses a number of configuration settings that tell it where to find files. You basically provide the paths to your FoxPro code and the Web folder, and the virtual/site URL to start the Web browser.

Here’s what a typical configuration looks like:

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="webConnectionVisualStudio"
             type="System.Configuration.NameValueSectionHandler,System, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  </configSections>
  <webConnectionVisualStudio>
    <!-- Configuration Settings for the Web Connection Visual Studio Add-in
         Not used at runtime, only at design time -->
    <add key="FoxProjectBasePath" value="c:\WebConnection\Fox\"/>
    <add key="WebProjectBasePath" value="c:\WebConnection\Web\wconnect60\"/>
    <add key="WebProjectVirtual" value="http://localhost/wconnect60"/>

    <!-- Optional PRG launched when VFP IDE launches -->
    <add key="IdeOnLoadPrg" value=""/>

    <add key="WebBrowser" value="C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"/>

    <!-- The editor used to edit FoxPro code - blank means FoxPro Editor is used -->
    <add key="FoxProEditor" value=""/>
    <add key="FoxProEditorAlternate" value="C:\Program Files\Sublime Text 3\sublime_text.exe"/>
  </webConnectionVisualStudio>
</configuration>


The good news is that the Add-in doesn’t need any configuration in a Web Site Project (which is typically what you’ll use for Web Connection projects) for the View in Browser functionality, as it can determine the default browser used by Visual Studio (on the Debug button drop down) and the Web path configured for the Web site project. If you provide the values in the configuration, those values take precedence, but if you leave the WebProjectVirtual and WebBrowser keys empty, View in Browser still works in most cases (if you’re using IIS Express just make sure it’s started first as discussed above).

The Show Fox Code options require that you set the FoxProjectBasePath and WebProjectBasePath keys and – if you want to use an alternate editor – the path to the alternate editor EXE. Note that Sublime is set up by default – if you don't want it you can blank out the value.

When Web Connection creates a new project for you, it automatically generates the relevant configuration information, so on new projects the configuration settings you specified during the setup process are applied automatically.

Web Connection Web Controls Toolbox Items

The VSIX now also contains the Web Connection Web Controls which are required in order to get the Web Connection controls onto the Visual Studio Toolbox. In the past there was a registration service that allowed registration of components from a special location but again that feature has been discontinued in Visual Studio 2013 and later. The VSIX now contains the controls embedded in the Add-in DLL and they are properly and quickly installed as part of the VSIX registration.


The good news is that this is much more reliable than the past mechanism and much quicker. On the downside the location of the DLL is a deep path inside of your user settings so it’s not so obvious where the file is loaded from.

Just as a reminder – the WebConnection-addin.dll and also the old WebConnectionControls.dll, although added to your project, are not used at runtime. These DLLs merely provide the placeholder controls for design-time properties and configuration.

Uninstall – Reinstall

Because VSIX files are essentially Visual Studio-specific installers, you can easily uninstall and reinstall everything. If something breaks or the add-in stops working, it's very quick to uninstall the VSIX and simply reinstall it.

To uninstall go to Tools | Extensions and Updates, find the Web Connection Add-in and click the Uninstall button. The add-in and toolbox controls will be removed. To reinstall simply download or find the VSIX installation file in your Web Connection installation.

In Web Connection 6.0 the VSIX will be found in this location (note: this is not available yet since 6.0 hasn't shipped):


You will also be able to download it from:

Web Connection Visual Studio Add-in for Visual Studio 2015

Going forward

For Web Connection 6.0 the configuration Wizard will continue to install the old-style Add-in for Visual Studio 2010-2013, and the new VSIX-based wizard for Visual Studio 2015 and later. If you're using Visual Studio primarily for Web Connection work I highly recommend you look at using Visual Studio 2015 Community because it provides the most functionality that's useful for Web development.

The VSIX works and I’ve been using it for the last week in development of a bunch of the new stuff for Web Connection 6.0 and creation of some of the demos for the Web Connection Training at Southwest Fox in October and for my sessions there.

But it is still a beta, so there may be a few rough spots. If you find any of them please let me know on the message board in the Web Connection section.

Clicks not working in Internet Explorer Automation from FoxPro

1 comment
Sunday, July 19, 2015, 10:00:45 PM

A few days ago, somebody posted a question on our message board mentioning that Internet Explorer Automation (using COM and InternetExplorer.Application) fails to automate click events in recent versions of Internet Explorer. A quick check with my own code confirmed that clicks indeed do not trigger properly when running code like the following:

o = CREATEOBJECT('InternetExplorer.Application')
o.visible = .T.
o.Navigate('http://west-wind.com/wconnect/webcontrols/ControlBasics.wcsx')

DO WHILE o.ReadyState != 4
   WAIT WINDOW "" TIMEOUT .1
ENDDO

loWindow = o.document.ParentWindow
? loWindow
*loWindow.execScript([alert('hello')])

oLinks = o.Document.getElementsByTagName('a')
oLink = oLinks.item(0)
? oLink.href
oLink.click()   && doesn't work

o.document.getElementById('txtName').value = 'Rick'
oButton = o.document.getElementById('btnSubmit')
? oButton
oButton.Click()   && doesn't work

Note the link and button clicks – when this code is run with Internet Explorer 10 or later the page navigates but the clicks are never registered in the control. Now this used to work just fine in IE 9 and older, but something has clearly changed.

IE 10 – DOM Compliance comes with Changes

Internet Explorer 10 was the first version of IE that supports the standard W3C DOM model, which is different from IE's older custom DOM implementation. If you're working with IE COM Automation you will find a number of small things that have changed and that can cause major issues in applications. In Html Help Builder, which extensively uses IE automation to provide HTML and Markdown editors, I ran into major issues at the time when IE was updated. There were both actual DOM changes to deal with for W3C compliance, as well as some behavior changes in the COM interface for accessing the DOM from external applications.

The issue in this case is the latter. The problem is that IE now exposes DOM elements natively, which means the DOM elements are exposed as COM objects via the native JavaScript objects. Specifically, JavaScript functions always have at least one implicit parameter – the arguments array – and that's reflected in the dynamic COM interface.

JavaScript Method Calls Require a Parameter

The workaround for this is very simple – instead of calling:

oButton.Click()

you can call:

oButton.Click(.F.)
Passing the single parameter matches the COM signature and that makes it all work. Thanks to Tore Bleken who reminded me of this issue that I’ve run into myself countless times before in a few other scenarios.

So the updated code is:

o = CREATEOBJECT('InternetExplorer.Application')
o.visible = .T.
o.Navigate('http://west-wind.com/wconnect/webcontrols/ControlBasics.wcsx')
DO WHILE o.ReadyState != 4
   WAIT WINDOW "" TIMEOUT .1
ENDDO

*** Target object has no id so navigate DOM to get object reference
oLinks = o.Document.getElementsByTagName('a')
oLink = oLinks.item(0)
* oLink.Click(.F.)   && works now

o.document.getElementById('txtName').value = 'Rick'
oButton = o.document.getElementById('btnSubmit')
oButton.Click(.F.)   && works now

The hardest part about this is remembering that sometimes this is required and other times it is not – it depends on the particular implementation of the element you're dealing with. In general, if you are dealing with an actual element of the DOM this rule applies. I've also run into this with global functions called from FoxPro.

The rule is this: Whenever you call into the actual HTML DOM’s native interface, you need to do this. For example, if you define public functions and call them from FoxPro (o.document.parentWindow.myfunction(.F.)) you also need to ensure at least one parameter is passed. As a side note, functions have to be all lower case in order for FoxPro to be able to call them, due to FoxPro forcing COM calls to lower case and the functions being case sensitive in JavaScript. 
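For example, here's a sketch of the global-function case from FoxPro – the page URL and the showmessage() function are hypothetical, assumed to be defined as a global, lower-case JavaScript function on the loaded page:

```foxpro
*** Hypothetical example: assumes the loaded page defines
*** a global JavaScript function named showmessage()
o = CREATEOBJECT('InternetExplorer.Application')
o.Visible = .T.
o.Navigate('http://localhost/mypage.htm')
DO WHILE o.ReadyState != 4
   WAIT WINDOW "" TIMEOUT .1
ENDDO

*** All lower case, and at least one (dummy) parameter passed
o.document.parentWindow.showmessage(.F.)
```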

These are silly issues that would probably be fairly easy to fix if FoxPro were still supported. Alas, since FoxPro development is done, we'll have to live with these oddball COM behaviors. Luckily there are reasonably easy workarounds for some of the issues, like the simple parameter trick above.

Drive Mapping in Web Applications

No comments
Sunday, June 28, 2015, 6:31:30 PM

Over the last couple of weeks a number of questions came up in regards to getting access to network drives from within your Web Connection applications. Drive mapping errors can be really insidious because you often don't know what's actually causing the problem as the problems usually occur right during application startup where it's difficult to debug. Additionally it's not obvious that the error is drive mapping vs. some other issue like permissions or some other failure in the application.

This may seem trivial, but the environment you develop in and the environment you deploy to are often very different runtime environments, which can cause problems when it comes to mapping drives. The reason is that Web applications usually run under a system context, rather than a user context.

IIS User Context

In IIS, a Web Connection COM server or an IIS-launched standalone EXE typically runs inside the security context of an Application Pool. The security context is determined by the Application Pool Identity, which is set in the Application Pool's Advanced Settings in the IIS Management Console:


We recommend that you set the Application Pool Identity and, if you're running the server as a COM server, leave the DCOM settings at the Launching User, which passes these settings on to the COM server. This way configuration lives in a single place, as the security flows from IIS into the COM object.

Regardless of whether you use a system account like SYSTEM or Network Service, or a full local or domain account, the accounts loaded by default do not have a user profile (you can enable that, but I don't recommend it), which means standard mechanisms for loading common startup and system settings are not applied. One consequence is that drive mappings are never persisted across multiple logins, as that's part of a user profile.

So while you may have mapped a network drive with your user account, that network drive – even if persistently mapped – will not be visible to a different account, or even to the same account when loaded under IIS. This means you need to make sure that either your application, or the Windows subsystem that the account runs under, loads any drive mappings.

Mapping Drives Or UNC Paths?

There are a couple of ways you can access network drives: You can use mapped drives where a network share and path are mapped to a drive letter, or you can use the raw UNC paths to access the resources directly.

UNC Paths

UNC paths appear to be simpler at a glance because they are a direct connection to a remote resource. A UNC path looks like this:

\\server\share\folder\file.txt
and allows you to directly reference a folder or file using a somewhat verbose syntax.
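In FoxPro, for example, a UNC path can be used anywhere a local path is accepted – the server and share names here are made up:

```foxpro
*** Hypothetical server and share names
USE \\server\share\data\customers.dbf SHARED
COPY FILE \\server\share\reports\invoice.frx TO invoice.frx
```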

But there are a few problems with UNC paths. First and foremost, permissions are often an issue – if you are not referencing the remote path with the same credentials that you are logged on with on the remote machine, a UNC path won't work, as you can't easily attach credentials to your file access request.

The other issue is that performance often is not very good. There are mixed reports on this – some people have found that UNC paths work quickly and as fast as mapped drives, but in my experience over slower connections UNC paths tend to be drastically slower in connecting to remote resources. Actual transfer speeds tend to be fine, but connection speeds can often be slow, and it appears that name resolutions are not cached, resulting in slow connection delays.

Mapped Drives

Mapped drives let you map a drive letter to remote network resources. Typically you map a drive letter to a share on a remote server. In Windows this is done using the NET USE command:

net use i:  \\server\cdrive "password" /User:"username" /persistent:yes

This maps the i: drive to a remote resource.

As mentioned, the tricky part with drive mappings is knowing where they are visible. When net use is issued, the mapping is valid only for that specific user's context. This is why a drive mapped under your interactive desktop login is not going to be visible to the SYSTEM account, or even to your own account when running in a non-interactive context.

What this means is that if you want to map drives you have to map the drives from within the correct context. For a Web application running in IIS this means setting up the mapping either as part of the startup code of the application or as part of Windows startup scripts that are fired when a Windows session is started.

Mapping from within an Application

Personally I prefer to handle drive mapping as part of the application, to keep the drive dependencies configured in the same place as many other configuration settings. The key is to do the configuration before you access any resources that require these drives.

Using Web Connection it's easy to do this in the OnInit() or OnLoad() of the server code:

*** Map the drive (server, share and credentials are placeholders)
MapNetworkDrive("i:","\\server\share","username","password")

*** Check whether it worked by looking at some resource
IF !DIRECTORY("i:\")
   this.Trace("i: drive couldn't be mapped")
ENDIF

MapNetworkDrive() is a helper function from wwApi.prg that essentially shells out to Windows and calls the net use command. It runs in the context of your application so assuming you have rights to map a drive you should be able to map the drive here. It's a good idea to check for some known resource afterwards to see if the drive mapping worked since the function itself gives no feedback. If it fails call the Trace() function to log the info into wwTraceLog.txt so you can see the failure if it occurs and potentially stop loading the application by erroring out with an ERROR() call.

The function is pretty simple:

*** MapNetworkDrive
***  Function: Maps a network drive
***    Assume:
***      Pass: lcDrive     - i:
***            lcSharePath - UNC path to map \\server\share
***            lcUsername  - user name (if empty uses current creds)
***            lcPassword  - password
***    Return: .T. if the drive exists after mapping
FUNCTION MapNetworkDrive(lcDrive, lcSharePath, lcUsername, lcPassword)

IF RIGHT(lcDrive,1) != ":"
   lcDrive = lcDrive + ":"
ENDIF

lcRun = [net use ] + lcDrive + [ "] + lcSharePath + [" ]

IF !EMPTY(lcUsername)
   lcUserName = ["] + lcPassword + [" /USER:"] + lcUsername + ["]
ELSE
   lcUserName = ""
ENDIF
lcUsername = lcUserName + " /persistent:yes"

lcRun = lcRun + lcUsername
RUN &lcRun

*** Check to see if the folder exists now
RETURN DIRECTORY(lcDrive + "\")
ENDFUNC

Using System Policy Startup Scripts

One problem with mapping in the application is that the drive is mapped every time the application starts. While it doesn't hurt to remap drives, there is some overhead in this process as it's slow, and if you have multiple instances firing up at the same time there may be some interference causing the map to fail.

Another, potentially safer way is to use System Policy to create a startup script that runs a batch file that creates the mappings. These scripts are fired once per Windows session, so there's less overhead and no potential for multiple applications trying to map drives simultaneously.

To do this:

  • Open Edit Group Policy
  • Go to Computer Configuration/Windows Settings/Startup
  • Point at a Batch file that includes the net use commands to map your drives
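The batch file itself just contains the same net use commands shown earlier – server, share and credentials here are placeholders:

```bat
@echo off
net use i: \\server\cdrive "password" /User:"username" /persistent:yes
net use j: \\server2\data "password" /User:"username" /persistent:yes
```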



Drive mapping can be a major headache in Web applications if you are not careful and plan ahead for the custom execution environment in which system hosted applications run. Essentially make sure you explicitly map your drives either as part of the application's startup or as part of the console system startup whenever a Windows session is started. I hope this short post clarifies some of the issues you might have to deal with in the context of Web Connection applications.

Visual FoxPro and Multi-Threading

1 comment
Thursday, June 18, 2015, 3:57:22 PM

A few days ago somebody asked a question on the Universal Thread on whether it’s possible to run a Visual FoxPro COM component as a free threaded component. Questions like this come up frequently, because there is a general sense of confusion about how Visual FoxPro’s multi-threading actually works, so let’s break this down in very simple terms.

Visual FoxPro is not Multi-threaded

The first thing to understand is that Visual FoxPro is not actually multi-threaded in the sense that say a native C++ component is multi-threaded.

Visual FoxPro is based on a very old and mostly single threaded code base. Your FoxPro code that runs as part of your application always runs on a single thread. This is true whether you are running a standalone EXE, inside of the VFP IDE or inside of an MTDLL. Behind the scenes there are a few things that Visual FoxPro does that are truly multi-threaded, but even those operations converge back onto a single thread before they return to your user executed code in your application. For example, some queries and query optimization that use Rushmore are executed on multiple threads, as are some ActiveX interactions that involve events. The FoxPro IDE also does a few things interactively in the background. You can also call from FoxPro into a native DLL or a .NET component and start new threads outside of the actual Visual FoxPro runtime environment. But when we’re talking about the actual code executing in your mainline programs – they are single threaded. 1 instance of your executing FoxPro code == 1 thread essentially.

Keep this in mind – Visual FoxPro code is single threaded and it can’t and won’t natively branch off to new threads. Further, FoxPro code pretty much assumes it’s the only thing running on the machine, so it’s not a particularly good citizen when it comes to giving up processor cycles for other things running on the same machine. This means Visual FoxPro will often hog the CPU compared to other, more multithread-aware applications. Essentially VFP will only yield when the OS forces it to yield.

Visual FoxPro supports running inside of some multi-threaded applications by way of a mechanism called STA – Single Threaded Apartment – threading. The VFP runtime (VFP9T) is not thread safe internally. It is thread safe only when running in STA mode, called from a multi-threaded COM client that supports STA threading.

So what is a Multi-Threaded VFP COM DLL?

If you create a VFP component with BUILD MTDLL you are creating an STA DLL. STA stands for Single Threaded Apartment, a special COM threading model that essentially isolates the DLL in its own separate operating container. STA components are meant for components that are otherwise not thread safe. A thread safe component is one that can be instantiated and called from multiple threads using a single loaded instance without corrupting memory. Essentially this means that a free threaded component must either have no shared state, or isolate any shared state behind serialized access (Critical Sections, Mutexes etc.).

Visual FoxPro DLLs – MTDLL or otherwise – do not qualify as thread safe. FoxPro has tons of shared state internally, and if that shared state were accessed simultaneously from multiple threads – BOOOM! So FoxPro is not a true multi-threaded component.

This answers the original question: Visual FoxPro cannot run as a Free Threaded component directly as a DLL.

However, there are several ways that you can run VFP COM components in free threaded mode but not directly as a DLL. I’ll talk about that a bit later on.

STA For Multi-Threading

Ok so Free Threading is out for a DLL. But you can build MTDLL components which are STA components that can be used in multi-threaded environments that support STA components. For example, you can use ASP or ASP.NET pages to call FoxPro COM components and these components will effectively be able to run multiple simultaneous requests side by side.

This works by way of COM’s Apartment Threading model which essentially creates isolated apartments and caches fully self contained instances of the COM DLL in a separate storage space – the apartment. What this means is that when you create a new VFP COM component COM creates a new COM apartment and loads your DLL and the VFP runtime DLLs into it. COM leaves that apartment up and running. When the next request comes in it activates the same apartment with the already running runtimes inside of it, reattaches the component and thread state and then executes the request in the same apartment. If multiple requests come in simultaneously while all other apartments are busy a new one is created, so effectively you end up with multiple copies of the Visual FoxPro runtime running simultaneously, side by side. As requests come in they are routed to each of these existing apartments and the COM scheduler decides how long the apartments persist.

All of this happens as part of the COM runtime with some logic as part of the STA component in the VFP runtime that makes it possible to launch your VFP COM component and link the VFP runtime which is doing the heavy lifting. Behind the scenes each apartment then has its own thread local storage address space, which can hold what otherwise would be global shared data. A FoxPro MTDLL essentially maps your VFP component to a separate VFP9T runtime that stores all of its shared state in thread local storage, rather than in global shared memory. This is why there are a few things that don’t properly work in MTDLL components – the memory usage is different and in fact a bit less efficient as all shared state and buffers are stored in TLS.

What all this means is that in STA mode when simultaneous requests come in and process at the same time, the VFP runtime effectively runs multiple instances of the runtime side by side and thus ensures that there’s no memory corruption between what would otherwise be shared data.

If you want to understand how this works, try setting up a COM component that runs a lengthy query. First create a plain DLL component (not MTDLL), then load it into an ASP or ASP.NET application and hit the page with multiple browsers (or a tool like West Wind Web Surge for load testing). After a few hits you’ll likely see errors popping up in IIS indicating that your COM component crashed. This is due to the memory corruption that occurs when you are not using the STA-optimized DLL compilation.
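A minimal sketch of such a test component might look like this – the class, table and field names are made up:

```foxpro
*** Hypothetical test component: build once with BUILD DLL (plain)
*** and once with BUILD MTDLL, then compare behavior under load
DEFINE CLASS TestServer AS Session OLEPUBLIC

FUNCTION RunQuery
   LOCAL lcResult
   SELECT * FROM customers ;
      WHERE country = "USA" ;
      INTO CURSOR TQuery
   lcResult = TRANSFORM(_Tally) + " records"
   USE IN TQuery
   RETURN lcResult
ENDFUNC

ENDDEFINE
```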

Then recompile in MTDLL mode and try the same exercise again – you’ll find that now you don’t get a crash because the shared state is protected by multiple instances of VFP9T.DLL. If you want to look at this even deeper, fire up Process Explorer and open the host process (w3wp.exe for the IIS Application Pool), then drill into the properties and loaded DLL dependencies. You’ll see multiple instances of VFP9T.dll and your application DLL loaded (assuming you’ve loaded the app hard enough to require multiple instances to run simultaneously).

There's no reason not to use MTDLL COM components via STA if you can. STA is effective in isolating instances and that's as efficient as you are going to get. For do-nothing requests it's possible to easily get around 2000+ req/sec on a mid-range quad core i7 system. However, things get A LOT slower as soon as you add data access and return object data to the COM client, so the raw throughput you can achieve with STA is far outweighed by the CPU overhead of doing anything actually useful inside of your FoxPro code. Running a query and returning a collection of about 80 records, for example, drops the result down to about 8 req/sec. Call overhead is a nearly useless statistic when it comes to VFP COM objects, so even if free threading were possible, and even if it were 10x faster than STA threading, it would have next to no effect on throughput for most requests that take more than a few milliseconds. STA is the most effective way to get safe and efficient performance.

Where STA and MTDLLs fall short is error recovery. If an STA component fails with a C5 error or anything else that hangs a component, the component will remain loaded but won’t respond to further requests. The COM STA scheduler still thinks that component is active and running, which results in potentially confusing intermittent errors where, say, every 3rd request results in an error. There’s no good way to recover from that short of shutting down the host process (IIS application pool). There’s no way to shut down COM DLLs for maintenance tasks either – short of shutting down the host process. So if you need to run tasks like reindexing or packing of your data you need to really think ahead about how to do this, as you have to kill all instances and then ensure only a single instance (i.e. one user, not several overeager ones) performs the administration tasks in order not to end up with multiple instances that have files open.

Bottom Line: STA components provide a good simulation of multi-threading and this is the best way for getting multi-threaded components into a multi-threaded host like IIS typically – assuming the host supports STA threading and provided you don’t have very frequent admin tasks that require exclusive access you need to run against the server.

STA Support – not always available

STA is a good option for multi-threaded FoxPro code, but it’s not always available. In fact, more and more STA support is going away because the era of legacy COM components like FoxPro and VB6 is pretty much over for high level development; COM is relegated now to system components, which typically can support free threading. For example, the only ASP.NET technology that officially supports STA components is ASP.NET WebForms. ASP.NET MVC, ASMX Web Services, WCF, Web API and vNext all don’t have native support for STA built in.

There are ways to hack around this and I have a blog post that covers how to get STA components into various technologies:

Creating STA COM compatible ASP.NET Applications

which lets you use FoxPro components in different .NET technologies.

Free Threading – You Can Do It, but…

Earlier I said that it’s not possible to use VFP DLL COM components directly to run in Free Threaded environments. However, there are a couple of options available if you step outside of the box and use some component technology:

  • Use a COM+ Component
  • Use an EXE Server

Both of these technologies essentially create fully self contained objects. COM+ provides a host wrapper that can run either in process or out of process, while an EXE server is on its own a fully self contained instance. Both technologies have one big drawback: They are slow to create and dispose of instances.


COM+

COM+ is a wrapper technology that you can access on Windows using the Component Services plugin (make sure you run the 32 bit version of it for Fox components). COM+ allows you to register a VFP MTDLL COM component, which essentially creates a registry redirect that instantiates your COM component through the COM+ runtime. The COM+ runtime creates a host container that provides the STA apartment that VFP expects, and every access of the COM component is then routed through this host container. The registered ClassID points at COM+ with special keys that in turn point at the ClassID for your component. The container loads itself, then loads and executes the actual COM component inside of it.

COM+ supports both In Process and Out of Process activation modes but even the in-process mode tends to be fairly slow adding a lot of overhead for each call made to the container.

COM+ is also a bit of a pain to work with for debugging and updating of components. In order to update a COM component you have to unload the COM+ container and if the interface of the COM object changes you have to reregister the component in the COM+ manager which is fairly painful during development.

EXE Server

You can also use an EXE server to run as a free threaded component. Because an EXE server is effectively an out-of-process component, there’s no overlap in shared memory or buffers, so launching an EXE server is an option for executing in a free threaded environment. The limitation is that the calling server has to support IDispatch invocation of COM objects.

Like COM+, loading up an EXE component is slow because each time it’s instantiated a new instance of the VFP runtime is required. While the runtime disk images are cached, it’s still pretty slow to load up and shut down full processes. However, if performance is not critical and you have to support free-threaded environments, this is one of the easiest ways to make it work!
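As a rough sketch, an EXE server is just a COM-enabled VFP project built as an EXE, registered once, and then instantiated by its ProgId – the project, class and method names here are hypothetical:

```foxpro
*** Build and register once (names are made up):
***   BUILD EXE myserver FROM myserver
***   myserver.exe /regserver

*** Client code: each CREATEOBJECT spins up a new EXE process
loServer = CREATEOBJECT("myserver.TestServer")
? loServer.RunQuery()
loServer = null   && releasing the reference shuts the process down
```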

There are other alternatives for running EXE servers – like running a pool manager of instances so that instances are loaded and then cached rather than having to be restarted each time. I use this approach in West Wind Web Connection, and by removing the overhead of loading a VFP instance each time and keeping VFP’s running state active, raw request throughput with this approach actually rivals that of a DLL server. However, this sort of thing only works if you can control the actual activation as part of the application you are building (as I do in Web Connection).


Multi-threading in FoxPro is a tenuous thing. It’s possible due to some very creative hackery that was done on the VFP runtime to effectively make a single threaded application run in a multi-threaded environment. STA threading is a reasonable solution to make FoxPro work in those environments that support it. Unfortunately STA support is implemented by fewer and fewer new technologies, so it’s harder to find places where STA components will actually work anymore. When STA doesn’t work there are alternatives: COM+, or – if performance doesn’t matter much – EXE servers can be used as well, and both of these technologies work in fully free threaded environments at the cost of some performance.

Today if you’re starting with any sort of new project, the recommendation for multi-threaded applications is to look elsewhere than FoxPro. Most other environments have native support for multi-threading so it’s much easier to integrate into multi-threaded environments.

A Preview of Features for Web Connection 6.0

Sunday, June 7, 2015, 2:21:07 PM

I've set some time aside to start working on new functionality for Web Connection 6.0. I've been really on the fence about whether I wanted to go down this path due to the ever-declining FoxPro market. However, there is still a fairly large contingent of developers out there using Web Connection, and even a fair number of users coming into Web Connection as new users. The existing version works fine and has most of the features that most developers need, but the truth is that a lot of the samples, documentation and content have gotten really dated. We do build Web apps differently these days, and while Web Connection still works fine for all the development scenarios with the updates I've provided over the years, the demos and documentation really don't reflect that very well any more.

So for Web Connection 6.0 I've been planning a few things to make using Web Connection a bit easier. I still work with a lot of customers building Web Connection applications, and there are a few things that I think will really improve the usability of the product. While I plan on adding some new features and revamping the default Wizards and configuration, I also want to ensure that there's minimal to no impact on existing applications. While I expect the new project experience and layout to be different, the core engine and functionality won't change much, so existing applications will continue to run just fine on the new version.

Planned Feature Enhancements

There are 4 key areas that I'm planning on addressing with Web Connection 6:

  • Simplified Project Configuration and Management (and documentation)
  • Improved Templating and Scripting
  • Improved support for Mobile Web development (REST Services and Mobile friendly default templates)
  • Overhaul the default templates for new projects and the samples

These are only 4 bullets, but they are actually quite significant in terms of work that needs to be done, especially the latter two, which mostly involve redoing the existing examples, as well as reworking the documentation to match. Some of the work for the third item has already been done in version 5.70 and later, but there are additional improvements to be made and integrated.

Let's look at each of these.

Improved Project Setup

One thing that I've heard over the years is that it's a pain to manage Web Connection projects. In the past I've built the library in such a way to make it as easy as possible to create a project and ensure that the code 'just runs' out of the box. The result was that Web Connection generates new projects into the install folder, along with any other projects. If you manage multiple projects this gets messy – it's hard to separate your actual project code from other projects and from the framework code.

So last week I built a new New Project Wizard that creates projects in a much more self-contained fashion. When you create a new project with Web Connection now, you create a new project folder that contains both the code and web folders underneath the new project root. The code folder contains only your own code, and Web Connection is referenced via SET PATH. The new Wizard generates the necessary pathing and a shortcut to make it easy to get set up properly so that your project just builds. This was tricky to get right, and in fact has been the reason I did not want to implement this in the past.

The New Project Wizard has been whittled down to 2 simple pages now:



Gone are most of the choices for file locations and IIS configuration beyond picking the site, so this is much less intimidating for new users.

This is possible because with the new project structure we can create new projects with a known location and all of the paths for the Web site and configuration file locations can be determined based on the project path.

The default configuration settings are now also using relative paths for most paths, so that configurations are much more portable. In many cases you can just push up your project folder to a server and other than setting up the IIS ApplicationPool and Virtual directory the application should just work.

Here's what the project layout looks like:


There's a project root folder (DevDemo) and deploy and web folders. Deploy is the 'code' or 'binary' (or both) folder for your project – this is where your application starts from. The web folder holds all the Web content and that's what's mapped in IIS to the Web site or Virtual directory.

The deploy folder contains your server and process classes, the project file, the compiled EXE and the INI configuration file. The Project Wizard also generates a config.fpw file with paths pointing back at the Web Connection install folder (or wherever Console.exe was run from), plus a desktop shortcut for the project that points at that config.fpw file. There's also a SetPaths.prg that sets up the same paths if you simply change directory into the folder, as a lot of you do.

Also notice that the temp path has now moved into the deploy folder by default. This is a known, relative location in the project – for the executable it's just ".\temp", for the Web app it's "~/../deploy/temp". Again, if you move the app to the server with this same folder structure, things just work without reconfiguring paths in your config files.
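Pulled together, the temp path settings described above end up looking something like this in the two configuration files (the file and key names here are illustrative – check your generated INI file for the exact names Web Connection uses):

```ini
; deploy\MyApp.ini - configuration read by the FoxPro EXE (illustrative names)
[Main]
TempFilePath=.\temp\

; settings on the web side reference the same folder relatively (illustrative)
[MyApp]
TempFilePath=~/../deploy/temp
```

Because both values are relative to the project layout, the same files work unchanged when the whole project folder is copied to a server.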

Note that this is just the new default setup. Nothing has changed in the way you configure Web Connection itself, meaning that if you still want to put your web folder into inetpub/wwwroot you can do that as well. You can move the code folder, the web folder – anything at all. You just have to manually adjust the configuration settings in IIS and in your configuration files to match.

Better late than never right? :-)

Templating and Scripting Improvements

When I released Web Connection 5.50 a few years back, that release acknowledged the fact that the Web Control framework introduced in Web Connection 5.0 didn't really take off. While there are a number of you using that framework heavily, most developers are continuing to use the templating and scripting features. 5.50 introduced the wwHtmlHelpers set of helpers that provide a lot of the functionality of the Web Controls for the simpler scripting and templating applications.

For Version 6.0 I plan on adding a couple of important features to templates and scripts:

  • Integrated Partial Rendering
  • Support for Layout/Master Pages

If you've used any sort of server side MVC (Model/View/Controller) framework before, you've probably noticed that Web Connection's Process Class/Script/Template mechanism is essentially an early MVC implementation. However, most modern MVC implementations support nested views in a variety of ways. Having an easy way to pull in nested content or partials makes life a lot easier. Web Connection has always had support for this, but the syntax for it was nasty and very difficult to discover.

Likewise there's the concept of 'Layout' pages that act as a master view container. Most applications have a 'base' layout into which other content is added. So you have a Layout page into which you render your actual page content. The page content in turn can use partials to render additional reusable components into the page.

In the last couple of days I implemented both Partials and Layout pages for both the template and script engines in Web Connection.

Here's how this will work.

Templates
Web Connection templates are typically rendered with Response.ExpandTemplate() or by using .wc pages through the default script map handler. Templates are evaluation-only: they are loaded and executed by parsing the document and replacing FoxPro code expressions and script blocks. Templates support self-contained code blocks, but don't support structured code that mixes HTML and code.

The new Partial and Layout features fit well with templates however since these operations are basically just expressions calling out to other templates. Here's an example of a set of templates that can interact with each other:

<% Layout="~/LayoutPage.wc" %>

<div>
    <h3>This is my rendered content</h3>
    <p>
        Time is: <%= DateTime() %>
    </p>
    <%= RenderPartial("~/PartialPage2.wc") %>
</div>

RenderPartial() delegates out to another script page, which simply contains more script content just like this page. The partial page in turn can contain additional nested content.

The syntax is:

<%= RenderPartial("~/PartialPage.wc") %>

and has to be used with this exact spacing and syntax in order to work properly. The ~/ prefix denotes the root of the site/virtual directory, so this expects a PartialPage.wc to exist at the root of the site. The ~/ syntax is required!
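Conceptually, resolving the ~/ prefix is just a string substitution against the site's virtual root. Here's a little JavaScript sketch of the idea (resolveVirtual is a made-up helper for illustration – it is not Web Connection's actual implementation):

```javascript
// Illustrative only: expand a "~/" site-root path against a virtual root.
function resolveVirtual(path, virtualRoot) {
    if (!path.startsWith("~/"))
        return path;                        // not a site-root path: leave as-is
    // ensure exactly one slash between the root and the rest of the path
    var root = virtualRoot.endsWith("/") ? virtualRoot : virtualRoot + "/";
    return root + path.substring(2);
}

// a site running in the /wconnect virtual directory:
console.log(resolveVirtual("~/PartialPage.wc", "/wconnect")); // /wconnect/PartialPage.wc
// a site running at the web root:
console.log(resolveVirtual("~/PartialPage.wc", "/"));         // /PartialPage.wc
```

This is also why the ~/ form is portable: the same page works whether the application runs at the root of a site or inside a virtual directory.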

The content on this page is meant to be rendered into a Layout page. You'll notice that this page is really just an HTML fragment, not a full document. There's no <html> or <body> tag – those are meant to be provided by the layout page specified with this syntax:

<% Layout="~/LayoutPage.wc" %>

Again the ~/ is required and points at the root of the site/virtual.

The actual layout page in turn looks like this:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
    <h1>This is my LAYOUT PAGE Header</h1>

    <%= RenderContent() %>

    <hr />
    LAYOUT PAGE Footer
</body>
</html>

This page has the HTML header and it pulls in the content from the previous page we looked at via:

<%= RenderContent() %>

Layout pages are great for creating a basic shell of a layout – that includes the HTML header and other content that ends up on just about every page of your application. It's great for pulling in css and scripts and base layout for pages.

Scripts
Web Connection scripts are typically called using Response.ExpandScript() or by using scripts on disk with a .wcs extension. Scripts are parsed into full PRG based programs that are executed as compiled FoxPro code. Scripts support most of the same features that templates support, but can additionally run structured statements that mix HTML and code. Because scripts are compiled FoxPro code, Web Connection requires a compilation step, and it's not quite as straightforward to update script files if multiple instances are running and have the compiled FXP files locked.

The new Partial and Layout rendering in scripts uses the same syntax I showed with templates.

<% Layout="/wconnect/weblog/LayoutPage.wcs" %>

<h3>This is my CONTENT PAGE</h3>
<p>
    Time is: <%= DateTime() %>
</p>
<% for x = 1 to 10 %>
    <h3>Partial Content below</h3>
    <hr />
    <%= RenderPartial("~/PartialPage.wcs") %>
<% endfor %>

In this example a partial is rendered 10 times in a loop which demonstrates the structured mixing of code and HTML that you can't do in templates.

As in the template example, the layout page has to include a call to RenderContent() to force the content page that referenced the Layout page to be rendered.

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
    <h1>This is my LAYOUT PAGE Header</h1>

    <%= RenderContent() %>

    <hr />
    LAYOUT PAGE Footer
</body>
</html>


Layout Sections

You can also create layout sections in the master page that are 'filled' from the content page. A typical example is when you want to add scripts or CSS stylesheets to the page from the content page, or when you have some other area of the page that you want to fill with content from the content page. Essentially this lets you embed content from the content page outside of the main content rendering area.

Start by setting up your layout page:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <%= RenderSection("headers") %>
</head>
<body>
    <h1>HEADER FROM LAYOUT PAGE</h1>

    <%= RenderContent() %>

    <hr />
    <%= RenderSection("scripts") %>
</body>
</html>

This designates two sections, headers and scripts. The string is a label of your choice – you can have as many sections as you like.

In the content page you can then create sections that will essentially fill these sections in the layout page:

<% Layout="/wconnect/weblog/LayoutPage.wcs" %> 

<% section="headers" %>
     <title>My Content Page</title>
    <link href="css/MyPage.css" rel="stylesheet" />
<% endsection %>

<div>
    <h3>This is my CONTENT PAGE</h3>
    <p>
        Time is: <%= DateTime() %>
    </p>

    <h3>Partial Content below (10x)</h3>
    <hr />
    <% for x = 1 to 10 %>
        <%= RenderPartial("~/PartialPage.wcs") %>
    <% endfor %>
    <hr />
</div>

<% section="scripts" %>
    <script src="bower_components/lodash/lodash.js"></script>
<% endsection %>

You can also use script expressions (but not code blocks) inside of sections. Sections can fill other parts of a document – a user status widget that displays login information, for example. Lots of options here.

At this point sections only work in scripts, not templates. It took some extremely ugly recursive and generator style code to make this work, and I'm not sure at this time whether it'll be possible to make this work with the current template engine. But then again, if you are doing stuff complex enough to require sections and layout pages you probably should be using scripts anyway.

Finicky Tags

In order to keep the parsing overhead to a minimum when dealing with these new 'directive' tags, the tag names have to match exactly – including the spacing inside of the tags. This allows quick string searches for these tags rather than traversing the whole document to find them, which dramatically speeds up performance. None of the new tags are directly executed by FoxPro – the expression names were chosen to be logical and easy to remember so they're easy to use in a page. They are translated into actual executable script code when the page is parsed – they expand into something different. Sections in particular expand into some really ugly code with literal strings, but that's the price for this level of complexity. As before, you'll be able to see the underlying code that makes the pages run in the generated codebehind PRG files.
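To illustrate why the exact spacing matters: with a fixed token, the parser can locate directives with plain string searches instead of walking the whole document. Here's a rough JavaScript sketch of that approach (this mimics the idea only – it is not Web Connection's actual parser):

```javascript
// Illustrative sketch: find exact-match RenderPartial directives with a
// plain string search rather than a full template parse.
function findPartials(template) {
    var open = '<%= RenderPartial("';   // must match exactly, spacing included
    var close = '") %>';
    var paths = [];
    var at = template.indexOf(open);
    while (at >= 0) {
        var start = at + open.length;
        var end = template.indexOf(close, start);
        if (end < 0) break;             // malformed tag: stop scanning
        paths.push(template.substring(start, end));
        at = template.indexOf(open, end + close.length);
    }
    return paths;
}

var page = '<div><%= RenderPartial("~/Header.wc") %>' +
           '<%=RenderPartial("~/Ignored.wc")%>' +       // wrong spacing: skipped
           '<%= RenderPartial("~/Footer.wc") %></div>';
console.log(findPartials(page));  // finds ~/Header.wc and ~/Footer.wc only
```

Note how the directive with non-standard spacing is simply never found – which is exactly the "finicky" behavior described above.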

Partials and layouts should make building complex apps quite a bit less repetitive, as you can do away with a lot of boilerplate code and simply put it into a Layout page.

Although you could do at least partials before, these new features standardize the behavior and make it much more discoverable.

Improved support for REST/API Services and Mobile Web

Mobile applications are becoming more important by the day as more people rely on mobile devices of various sizes to access their data and applications. Building mobile applications is quite different from building 'classic' Web applications in that they need simpler user interfaces and have to handle display sizes more effectively. As a result we've seen a big move towards client centric applications that use HTML/CSS and JavaScript to handle the user interface, using the server to serve static resources and server based services to feed data.

In the last releases of Web Connection (5.65 and later) there have been steady improvements to provide for that REST/API Service layer via the new wwRestProcess class that can facilitate consuming and serving of JSON data from your FoxPro business objects. There are a number of additional improvements that can be made here in terms of performance and simplifying the interface even more. This will be mostly an incremental change but it's an important piece in the overall Web Connection 6 package.

The other piece of that is the client side. This isn't directly related to Web Connection other than providing improved examples that demonstrate how to build these types of applications – apps with an all browser based, client side user interface that works well both on mobile devices and in full sized browsers. The Music Store sample was the start of that. Again, some of this functionality is there already, but it's part of the packaging for 6.0.

Updated Samples And Default Templates

As mentioned at the beginning of this post, the main reason for all of this work is to make it easier to get started and to set up new projects that are ready to go. This helps my own work too, as I still work with a host of customers both to showcase specific functionality and to create actual new projects.

The goal is to build new templates that default to using Bootstrap for user interface layout. I plan on providing a default theme based on bootstrap that provides a basic customized bootstrap setup that can be easily customized (or go back to stock bootstrap if you choose).

This seems like a minor thing, but there are a number of issues related to it. All of the samples in Web Connection are based on old custom CSS that doesn't use any special CSS or other dependencies. While that worked in the old days, it really looks dated now. Fixing this involves going through all the existing samples, all the existing admin features and the starter templates. This will take some time unfortunately, and most of it is tedious, boring work :-)

Along the same lines a lot of the simple feature samples in Web Connection need to be updated – some just need a visual refresh, others need a complete logic rebuild to work the way you'd do things today. Some of these samples are nearly 20 years old, so yes there's some room for improvement I suppose.

Likewise the Web Control Framework uses all the old styling, and I'm not sure that I will be updating those controls to use the new layouts. There's too much baggage there to make that work – that might be a 6.1 feature.

Dropping Visual FoxPro 8 Support

One thing that has made maintenance of Web Connection more problematic over the years is support for Visual FoxPro 8. With version 6 I will be dropping support for VFP8. There's no good reason for anybody to be running VFP8 these days, given that VFP8 and VFP9 have very, very little in the way of feature differences. That was not the case for the earlier versions 7 and 6, but those haven't been supported for some time.

I know some of you will howl at this (I still get frequent requests for 'does this tool work with VFP 6?'), but there's no excuse to be running anything other than VFP 9 for FoxPro applications. Maintaining VFP 8 support has been an issue, as it requires extra testing, extra files to distribute and keep in sync, and having to remember what we can and can't use in the latest versions of our tools.

Other Odds and Ends

There are a few other things that need attention as well. The authentication features in Web Connection are reasonably functional, but hooking them up is currently badly documented and could be made easier with a few additional hooks. It would also be nice to ship a default login page template that is easy to customize as part of a new application – that way you can simply modify a Web page to get the look you want, without having to override a bunch of Process class methods as you have to do now to accomplish the same thing.

The various Wizards will also need some updating to reflect the new templates. For the most part the updates will be minor, except for the Process Wizard, which needs an overhaul similar to the one the New Project Wizard received.

What else?

If you're using Web Connection, what's on your wish list? What features within the realm of the core framework do you want to see that I've missed? I know there are tons of requests to build this or that vertical functionality, but I can tell you that's not going to happen from me – that's what you guys can build on top of Web Connection :-) But if you have core framework features or utilities that you think would make your life easier, I'd love to hear about them.

Single File Image Uploads with plUpload and Web Connection

No comments
Monday, April 20, 2015, 10:31:00 AM

I get a lot of questions about uploading files in regards to Web Connection. Uploading is a thorny problem, especially if you need to upload multiple files or want to use asynchronous AJAX uploads rather than submitting files through standard <input type="file"> elements, which suffer from many problems. Using an asynchronous upload component gets around the 16 meg string limit in FoxPro and Web Connection, and lets you do the uploads in the background while you display progress information and keep your UI otherwise active.

Web Connection has had support for plUpload for some time now. There’s a plUpload server handler that allows you to use the plUpload Queue component which is a visual UI component that’s provided by the plUpload library. Web Connection provides a generic plUploadHandler class that can be used on the server side to capture uploaded files.

The examples show how to do this using the plUpload Queue component, which is a bulky UI control. Here's what the full UI that you can drop on any form looks like:

You can check out this example and the following single file upload example on the Web Connection Samples page:


Single File Uploads

The full component is useful for batch uploads, but sometimes you don’t want or need this rather bulky UI to handle file uploads. For example I recently needed to add image uploads to the West Wind Message Board and in order to do this I really wanted a much simpler UI that simply triggers the image upload from a button:


There’s a simple button that when clicked allows you to pick a single file (although you can also have it pick multiples) and then immediately starts uploading the image to the server. When done the image URL returned is then embedded into the user’s message at the current cursor position.

Let’s see how we can do this using the simpler plUpload base API.

Using the base plUpload API

plUpload includes a number of different upload components but behind it all sits a base uploader API which doesn’t have any UI components associated with it. For my image uploader I don’t want any extraneous UI – I only want to upload the file and provide some basic progress information.

To do this we can use the core plUpload API. Here’s how this works.

Let’s start with the Add Image Dialog HTML which displays the modal dialog above:

<div id="InsertImageDialog" class="dialog" style="display: none; min-width: 320px; width: 80%">
    <div class="dialog-header">Insert Image</div>
    <div class="dialog-content">
        <label>Web Image Url: <small>(external source ie. Flickr, Imgur, DropBox etc.)</small></label>
        <input type="url" id="txtImageLink" style="width: 98%" />
        <button type="button" id="btnImageSelection" class="bigbutton" style="margin-top: 5px;">Insert Web Image</button>

        <div id="container" style="margin-top: 30px;">
            <label>Upload an Image:</label>
            <button id="btnUploadFile" onclick="return false;" class="bigbutton">Select file to Upload</button>
        </div>
    </div>
</div>

Next we need to add the relevant plUpload script to the page.

<script src="scripts/plUpload/plupload.full.min.js"></script>    

Then we need to configure the plUpload Javascript code which can live in a script tag on the bottom of the page or as is the case here inside of a page level JavaScript file.

var uploader = new plupload.Uploader({
    browse_button: 'btnUploadFile', // you can pass in an id...
    container: document.getElementById('container'), // ... or a DOM Element itself
    url: "ImageUpload.wwt",
    runtimes: 'html5,flash,html4',
    chunk_size: '64kb',

    // Resize (downsize really) images on the client side if we can
    resize: { width: 1024, height: 768, quality: 85 },
    filters: {
        max_file_size: '4mb',
        mime_types: [
            { title: "Image files", extensions: "jpg,gif,png" }
        ]
    },

    // Flash settings
    flash_swf_url: 'scripts/plupload/js/Moxie.swf',

    init: {
        PostInit: function () { },
        FilesAdded: function (up, files) {
            // start uploading - we only accept one file
            uploader.start();
        },
        UploadProgress: function (up, file) {
            showStatus("Upload Progress: " + file.percent + "% complete", 3000);
        },
        Error: function (up, err) {
            showStatus("Upload failed: " + err.code + ": " + err.message);
        },
        FileUploaded: function (up, file, response) {
            uploader.removeFile(file);
            var imageUrl = response.response;
            if (imageUrl) {
                markupSelection("<img src=\"" + imageUrl + "\" />");
                $("#InsertImageDialog").modalDialog("hide");
            }
        },
        UploadComplete: function (up, files) { }
    }
});
uploader.init();

The plUpload script code is pretty self-descriptive, so not much explanation is needed. The most important properties here are browse_button, which is the id of a button or link that triggers the file selection when clicked, and url, which points at the server target URL that responds to the plUpload file chunks sent to the server.

The interesting stuff happens in the event handlers.

Since I’m only dealing with a single file selection, I can use the FilesAdded event to immediately start the file upload under program control. This event fires whenever you select one or more files, and if you use a single button it makes sense to just kick off the upload without further user confirmation.

For progress and error information I use the ww.jquery.js showStatus() function, which is a quick and easy way to display status information on a status bar at the bottom of the form.

The most important piece though is the FileUploaded event, which is used to confirm the file upload and capture the generated filename that the server saved. The function receives the upload component, the individual file object and an HTTP response object. The main thing we're interested in is the response property of the response object, which provides a fully qualified image URL pointing at the image the server saved. This value is captured, an <img> tag is created and then pasted into the text control at the current cursor position.

Handling the Image plUpload on the Server Side

As mentioned earlier, Web Connection includes a plUploadHandler class that makes it pretty straightforward to handle uploads. The class works in conjunction with a wwProcess class, receiving the plUpload file chunks and putting the files back together on the server. This makes it possible to post files larger than 16 megs to the server, because each file is sent in small chunks that are progressively appended to a file on the server.
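The chunked transfer is easy to picture in miniature: the client slices the file and posts each slice along with its index and the total count, and the server appends the slices in order, firing the completion event on the last one. Here's an in-memory JavaScript simulation of that flow (sliceFile and uploadSimulated are invented names for illustration; real plUpload posts multipart form data with chunk and chunks fields):

```javascript
// Slice "file" data into fixed-size chunks, the way a chunking uploader does.
function sliceFile(data, chunkSize) {
    var chunks = [];
    for (var i = 0; i < data.length; i += chunkSize)
        chunks.push(data.substring(i, i + chunkSize));
    return chunks;
}

// Simulate both sides of a chunked upload in memory: each "POST" carries one
// slice plus its index; the "server" appends it to the growing target file.
function uploadSimulated(data, chunkSize) {
    var serverFile = "";                    // stands in for the server temp file
    var chunks = sliceFile(data, chunkSize);
    for (var n = 0; n < chunks.length; n++) {
        // each request carries: chunk=n, chunks=chunks.length, body=chunks[n]
        serverFile += chunks[n];            // server-side append
        if (n === chunks.length - 1)
            return serverFile;              // completion event fires here
    }
    return serverFile;
}

var payload = "x".repeat(200000);           // a ~200 KB "file"
console.log(uploadSimulated(payload, 65536) === payload); // reassembled intact
```

This is also why the 16 meg FoxPro string limit never comes into play: the server only ever handles one small chunk at a time.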

To implement the server side you’ll create two methods:

  • A standard wwProcess EndPoint Method
  • An OnUploadComplete Event that is fired when the upload is complete

The first is the actual endpoint method that is referenced by the plUpload component. If you look back on the JavaScript configuration you see that it points at ImageUpload.wwt which translates to the following method in my wwThreads Process class:

FUNCTION ImageUpload()

*** Make sure plUploadHandler is loaded
LOCAL loUpload as plUploadHandler
loUpload = CREATEOBJECT("plUploadHandler")

*** Upload to temp folder
loUpload.cUploadPath = ADDBS(THIS.oConfig.cHtmlPagePath) + "temp"
IF !ISDIR(loUpload.cUploadPath)
   MD (loUpload.cUploadPath)
ENDIF

*** Hook the completion event to our handler below
BINDEVENT(loUpload,"OnUploadComplete",THIS,"OnImageUploadComplete",1)

*** Constrain the extensions allowed on the server
loUpload.cAllowedExtensions = "jpg,jpeg,png,gif"

*** Process the file or chunk
loUpload.ProcessRequest()

ENDFUNC

This code creates a plUploadHandler component and tells it to store uploaded files in a temp subfolder. This is a temporary folder where files are uploaded to and then discarded after being copied to a permanent location.

We then hook up the OnUploadComplete event to another function, OnImageUploadComplete(), that does the post processing and moving of our file. Finally we specify the file extensions allowed for the image, and then we're ready to process the current request with loUpload.ProcessRequest().

This method is called multiple times for each file uploaded. Files are uploaded in chunks, so a 2 meg file is broken into many smaller chunks that are sent and processed one at a time. When a file is complete, the OnImageUploadComplete event fires. Here's what that looks like:

FUNCTION OnImageUploadComplete(lcFileName, loUpload)
LOCAL lcUrl, lcFile, lcNewFile, lcNewPath

lcFile = ADDBS(loUpload.cUploadPath) + lcFileName

*** Delete expired files - temp files only live for 10 minutes
DeleteFiles(ADDBS(JUSTPATH(lcFile)) + "*.*",600)

*** Copy the file to a permanent location with a random file name
lcNewFile = SYS(2015) + "." + JUSTEXT(lcFile)
lcNewPath = THIS.oConfig.cHtmlPagePath + "PostImages\" + TRANSFORM(YEAR(DATETIME())) + "\"
IF !ISDIR(lcNewPath)
    MD (lcNewPath)
ENDIF
lcFileName = lcNewPath + lcNewFile
COPY FILE (lcFile) TO (lcFileName)

lcUrl = THIS.ResolveUrl("~/PostImages/" + TRANSFORM(YEAR(DATETIME())) + "/" + lcNewFile)
lcUrl = "http://" + Request.GetServerName() + lcUrl

*** Write out the response for the client (if any)
*** In this case the URL to the uploaded image
loUpload.WriteCompletionResponse(lcUrl)

ENDFUNC

The handler is passed the original file name (just the filename without a path) and the loUpload component.

For the message board I want to capture the uploaded file, rename it with a random name and then move it to a more permanent folder – in this case PostImages/YEAR/. Once the file has been copied, the original uploaded file in the temp folder can be deleted.

Finally, OnImageUploadComplete() has to return the new URL to the client so the client can link to the image. If you recall, the JavaScript FileUploaded handler receives a response object with a response property. That response property holds whatever we write out with WriteCompletionResponse(). The most useful thing to return here is almost always the full URL to the uploaded resource, assuming the client is allowed to use that resource in some way. In the client application the URL is used to embed an image link into the user's message text.

Quite a bit of Code, but easy to do

What’s described above is the entire process involved, which is not entirely trivial. There are a fair amount of moving parts in this code both on the client and on the server, but between plUpload and Web Connection’s plUploadHandler the actual code you have to write is pretty minimal. Most of what you see above is boiler-plate code that you can cut and paste into place and then only customize the actual result handlers when uploads are complete both on the server and client. Although it’s a fair bit of code overall the non boiler-plate code is minimal.

© Rick Strahl, West Wind Technologies, 2004 - 2016